Saturday, January 26, 2019

A Vector Inside

When I finished my blog called “Hey Vector, who do I look like?” using Anki’s Vector and SAP Leonardo’s Machine Learning APIs…I started thinking about what I should work on next…at first…I was obviously running low on ideas…but then…all of a sudden…a nice idea came to me…what if we could control Vector…from the inside? I mean…what if we could simulate that we are inside Vector and that we can see through his eyes and make him move…

That's how this project started 😉


The Idea


So, I knew I wanted to be able to control Vector…I could use Amazon Alexa, but of course…that would leave the "inside" part out…so that was not a choice…then I thought about using Unreal Engine…as I had used it on my blog "SAP Leonardo Machine Learning APIs on the Go", where I combined Unreal Engine and an Oculus Go to showcase SAP Leonardo's Machine Learning APIs. Using Unreal Engine and the Oculus Go seemed like the perfect combination, so I started working on it.


What are we going to use?


I know that I mentioned many things…so let's get more information about them 😉

Vector

Anki's next evolution of Cozmo. Vector packs not only more power and more independence but also a microphone, so you can finally talk to him 😉 and it also comes with Amazon Alexa…so it's just an amazing little robot…

Unreal Engine

Unreal Engine is without a doubt "The most powerful creation engine". It can be programmed using C++ or Blueprints (visual programming) and, best of all…it's totally free! Unless you make a commercial game that sells…then they only ask for a 5% royalty.


Blender

Blender is an Open Source 3D creation suite: modeling, rigging, animation, simulation, rendering and a long etcetera…and bundled with version 2.8 comes EEVEE (Extra Easy Virtual Environment Engine), a real-time rendering engine.


HANA Cloud Platform

SAP's in-memory, column-oriented database running on the cloud. It includes Predictive Analysis, Spatial Data Processing, Text Analysis and much more. Also, it's blazing fast 😉


Python3

An interpreted, high-level, general-purpose programming language. It's the language used to program Vector through its SDK.



The First Problem


As I wanted to see through Vector's eyes…I needed to display Vector's video feed in Unreal Engine…I spent some time thinking about how to do that…but in the end I remembered that when you use a Microsoft HoloLens and pair it to your laptop for "streaming", there's always a delay of 1 or 2 seconds…and then of course I remembered that videos are just hundreds of images displayed in sequence very fast…I didn't care too much about speed or being close to real time…so a 1 or 2 second delay…is not a bad thing at all…

Well…the problems continued…I knew that I wanted Vector to take a picture every 1 or 2 seconds…and that picture should then reach Unreal Engine…and given that I code in Python on my Ubuntu Virtual Machine and use Unreal Engine on my regular Windows laptop…I wasn't too sure about sending the image as a file…so…I got the idea of encoding it as Base 64 (which yes…increases the size…but at least gives you a single huge string to deal with) and sending it…but how? Well…SAP HANA on the cloud is an in-memory database…so it's pretty fast…why not create a table and some REST APIs to deal with the creation, viewing and deletion of records…
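
Just to put a number on that "increases the size" remark…here's a minimal Python sketch (the ./img/Temp.png path is only an example, matching the one the final script uses) that compares the raw size of a saved frame with its Base 64 version…

import base64

#Minimal sketch: read one saved frame and compare its raw size with its Base 64 size
with open("./img/Temp.png", "rb") as imageFile:
    raw = imageFile.read()

encoded = base64.b64encode(raw)

print("raw bytes    :", len(raw))
print("base64 bytes :", len(encoded))  #roughly 4/3 of the raw size
print("growth       : {:.0f}%".format((len(encoded) / len(raw) - 1) * 100))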


Every time Vector takes a picture…it gets converted to Base 64 and then sent to the cloud, then Unreal Engine reads the API, decodes the Base 64 back into an image and displays it…and that led to the second problem…

The Second Problem


How do I decode a Base 64 image in Unreal Engine? While I know how to use C++ very well…when it comes to Unreal Engine I mostly use Blueprints, which is visual programming, and while underneath it's all C++…not everything is exposed…

Gladly, a quick visit to Unreal’s documentation gave me an answer in the form of a Base 64 encoding/decoding function…but in C++ of course…

But…the good thing about Unreal Engine is that you can create a C++ project…implement your Base 64 decoder class and then start creating Blueprints to consume it…another problem tackled…

That's what I thought…but then I realized that it wasn't just a matter of having the picture back as a picture…I actually needed a dynamic material where I could display the images…I browsed the web and found some interesting articles…but nothing that could really help me…in the end…I grabbed pieces from here and there, added my own research…and managed to make it work…

And yes…if you’re wondering…there was another problem…

The Third Problem


Everything was nice and dandy…I tested my solution…initially by passing some images in sequence to the cloud and then to Unreal, and then by using Vector…with a delay of 1 or 2 seconds…it looked like a video running on an old smartphone over a cheap Internet connection in the middle of the desert…good enough for me 😉

But…we were supposed to be inside Vector, right? How was I supposed to simulate that? After not too much thinking…I decided to use Blender 2.8 which is in Beta right now 😊 and that comes with EEVEE (which is an awesome real-time renderer). I made a small "control" room…with some fancy buttons and panels…a chair where you can sit while controlling "Vector" and a big screen to see what Vector is seeing…

Baking is not working right now…or at least I'm too dumb to figure out how to use it on 2.8, so using textures on Cycles was out of the question…so…I made a test on EEVEE using just plain materials…exported them as .FBX and they worked like a charm! So, I started working and testing it on Unreal…of course…I'm not a Blender expert, so while everything looks nice…not all the colors are rendered correctly ☹ At least it looks fairly decent 😉

No more problems


Yep…not that everything went nice and smoothly…but at least those were the most critical problems…so now we can actually start with the blog -:P 

Blender and The Control Room


As I said…I used EEVEE rendering on the Blender 2.8 Beta to create a control room that would somehow give the impression of being inside Vector…of course…a totally and completely poetic version because a) Who knows how Vector looks inside? b) I don't think there's enough room inside Vector to fit anything else…

First, I started by putting some buttons, knobs and keys on a panel…then I added some sort of radars along with sliders…


Then, I thought some switches and multi-colored buttons would make a nice addition…


Finally, I added a chair…because you need to sit somewhere, right?


The screen is just a white space…pretty much like in a cinema…


Looks pretty cool in Blender, right? Well…not so much in Unreal…not ugly…but certainly not optimized…probably due to the fact that I merged everything together and exported it as one big chunk…but that's fine…I'm not changing that…I'm lazy 😊


You see…not perfect…but not that bad either 😉

Here’s the .FBX file

Creating the Tables and APIs on HANA Cloud Platform


Next step…I created two tables on HANA Cloud Platform…I called the first table “VECTOREYES” because it’s the table that will hold the Base 64 images. Here’s the script to create it…

CREATE COLUMN TABLE "I830502"."VECTOREYES"(
 "TIMESTAMP" LONGDATE CS_LONGDATE NOT NULL,
 "VECTOREYE" CLOB MEMORY THRESHOLD 3000,
 PRIMARY KEY (
  "TIMESTAMP"
 )
) UNLOAD PRIORITY 5 AUTO MERGE;

For the primary key I used a TIMESTAMP basically because if something happens in terms of connection there would be no primary key clashes…

The next table will be called "VECTORCOMMAND" and will hold…the commands that will be sent to Vector…

CREATE COLUMN TABLE "I830502"."VECTORCOMMAND"(
 "NID" INTEGER CS_INT,
 "COMMAND" NVARCHAR(50),
 PRIMARY KEY (
  "NID"
 )
) UNLOAD PRIORITY 5 AUTO MERGE;

In this case…there's always going to be only one command at a time…so I used a single integer primary key.

With the tables created, we can generate our XS Engine package…and simply call it “VectorEyes”.

Create the following files…

.xsaccess
{
    "exposed" : true,

    "authentication" : {
        "method": "Basic"
    },

    "cache_control" : "must-revalidate",

    "cors" : {
        "enabled" : true,
        "allowMethods": [ "GET", "POST", "HEAD", "OPTIONS" ]
    },

    "enable_etags" : false,

    "force_ssl" : false,

    "prevent_xsrf" : false
}


.xsapp


Yep…not a typo…this is actually totally and completely empty…


AddVectorEye.xsjs
$.response.contentType = "text/html";

var conn = $.db.getConnection();

var content = $.request.body.asString();
content = JSON.parse(content);

var st = conn.prepareStatement("INSERT INTO \"YourSchema\".\"VECTOREYES\" values(?,?)");

st.setString(1,content.timestamp);
st.setString(2,content.vectoreye);

st.execute();
conn.commit();
st.close();
conn.close();

GetAddVectorEye.xsodata
service namespace "YourSchema"{
 "YourSchema"."VECTOREYES" as "vectoreye";
}

DeleteVectorEye.xsjs
$.response.contentType = "text/html";

var conn = $.db.getConnection();

var st = conn.prepareStatement("DELETE FROM \"YourSchema\".\"VECTOREYES\"");

st.execute();
conn.commit();
st.close();
conn.close();

With that, we can insert into, read and clear the VECTOREYES table. Let's continue with the VECTORCOMMAND table files…

AddVectorCommand.xsjs
$.response.contentType = "text/html";

var nid = $.request.parameters.get("nid");
var command = $.request.parameters.get("command");

var conn = $.db.getConnection();

var st = conn.prepareStatement("INSERT INTO \"YourSchema\".\"VECTORCOMMAND\" values(?,?)");

st.setString(1,nid);
st.setString(2,command);

st.execute();
conn.commit();
st.close();
conn.close();


GetVectorCommand.xsodata
service namespace "YourSchema"{
 "YourSchema"."VECTORCOMMAND" as "vectorcommand";
}


DeleteVectorCommand.xsjs
$.response.contentType = "text/html";

var conn = $.db.getConnection();

var st = conn.prepareStatement("DELETE FROM \"YourSchema\".\"VECTORCOMMAND\"");

st.execute();
conn.commit();
st.close();
conn.close();

That's it 😊 We simply need to activate it and test it…for sure Postman is the way to go 😉
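
If you prefer scripting the test instead of clicking around in Postman, here's a hedged Python sketch that exercises the same endpoints…the host, user and password are placeholders, so replace them with your own…

import requests

#Placeholders...replace with your own host and the Basic authentication credentials
BASE = "https://YourHANA.ondemand.com/VectorEyes/"
AUTH = ("YourUser", "YourPassword")

#Insert a dummy frame...the xsjs parses the raw request body as JSON
payload = "{\"timestamp\":\"2019-01-26 10:00:00\",\"vectoreye\":\"aGVsbG8=\"}"
print(requests.post(BASE + "AddVectorEye.xsjs", data=payload, auth=AUTH).status_code)

#Read it back through the OData service
rows = requests.get(BASE + "GetAddVectorEye.xsodata/vectoreye",
                    params={"$format": "json"}, auth=AUTH).json()
print(rows["d"]["results"])

#Queue a command for Vector...the parameters travel in the query string
print(requests.get(BASE + "AddVectorCommand.xsjs",
                   params={"nid": 1, "command": "forward"}, auth=AUTH).status_code)

#Clean both tables again
requests.get(BASE + "DeleteVectorEye.xsjs", auth=AUTH)
requests.get(BASE + "DeleteVectorCommand.xsjs", auth=AUTH)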

Creating our Unreal Engine project


As I mentioned earlier...I created an empty C++ Project using Mobile/Tablet, Scalable 3D or 2D and No starter content. I used Unreal Engine version 4.21.1 and called the project “VectorOculusGo”




When the project is open, I selected “File --> New C++ Class”, and chose “Actor”.


I called the class “ImageParser” and used the following code for “ImageParser.h” and “ImageParser.cpp”

ImageParser.h
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "ImageParser.generated.h"

UCLASS()
class VECTOROCULUSGO_API AImageParser : public AActor
{
 GENERATED_BODY()
 
public: 
 // Sets default values for this actor's properties
 AImageParser();

protected:
 // Called when the game starts or when spawned
 virtual void BeginPlay() override;

public: 
 // Called every frame
 virtual void Tick(float DeltaTime) override;

 UFUNCTION(BlueprintCallable, Category = "ImageParser")
  void ParseImage(FString encoded, TArray<uint8>& decoded);
};

Here we're creating a function that can be called via Blueprints. It will receive a String and return an array of bytes (the decoded image data).

ImageParser.cpp
#include "ImageParser.h"
#include "Misc/Base64.h"

// Sets default values
AImageParser::AImageParser()
{
 // Set this actor to call Tick() every frame. You can turn this off to improve performance if you don't need it.
 PrimaryActorTick.bCanEverTick = true;

}

// Called when the game starts or when spawned
void AImageParser::BeginPlay()
{
 Super::BeginPlay();
 
}

// Called every frame
void AImageParser::Tick(float DeltaTime)
{
 Super::Tick(DeltaTime);

}

void AImageParser::ParseImage(FString encoded, TArray<uint8>& decoded)
{
 FBase64::Decode(encoded, decoded);
}

Here, we simply call the Decode method from the Base64 library. This grabs the Base 64 string and converts it back into the raw bytes of the image.

In order to compile, we just need to right-click on the project name and select “Debug --> Start new instance”.


After the compilation is done, we can simply stop the debugging.

Before we continue…we need to download a library to manage REST APIs…it's called JSONQuery and it's amazing!

Simply close Unreal, go to the project folder and create a new folder called "Plugins", then download the .zip, unzip it inside the "Plugins" folder and delete the Binaries and Intermediate folders.

Then, you will need to change the source code a little bit…

Inside the “JSONQuery” folder, go to “Source --> JSONQuery --> Classes --> JsonFieldData.h” and look for “GetRequest”.

After const FString& url add const FString& auth

Then open “Source --> JSONQuery --> Private --> jsonfielddata.cpp” and look for the same “GetRequest”.

Here, add the same const FString& auth.

After the HttpRequest->SetURL(CreateURL(url)); add the following…

HttpRequest->SetHeader(TEXT("Authorization"), auth);

Save both files and open your project. You will get a message saying that part of the code needs to be recompiled. So simply accept and wait a little bit until everything gets compiled 😊

To check that everything is fine, go to "Settings --> Plugins" and go all the way down to find "Project --> Web" and JSON Query should be selected. 😉




Awesome, let’s continue.

In order to make our project work on the Oculus Go, we need to setup a couple things.

Setting up the Oculus Go


You may want to set up your Oculus if you haven't done that already 😊 Here's a nice link with all the explanation you need…

Setting up Unreal for Oculus Go


We need to install "CodeWorks for Android", which is actually bundled with your Unreal installation. So, go to "Program Files --> Epic Games --> UE_4.21 --> Engine --> Extras --> AndroidWorks --> Win64" and run "CodeWorksforAndroid-1R7u1-windows.exe".

You will notice that you are inside the C++ Classes folder, so just click on the folder icon next to it and select “Content”.



Don’t pay attention to the folders for now.

First, save the current map and call it “MainMap”. Then go to “Edit --> Project Settings”. Look for “Maps & Modes” and select “MainMap” in both “Editor Startup Map” and “Game Default Map”.


Then go down and select “Engine --> Input”. On the “Mobile” section set the Default Touch Interface to None.


Move down to “Platforms” and select “Android”. Click on “Configure Now”. Then move to “Android”. Set the minimum and target SDK version to “19”.

Also click on “Enable Fullscreen Immersive on KitKat and above devices” to enable it.

Look for “Configure the AndroidManifest for Deployment to Oculus” and enable it as well.

Now, click on “Android SDK” and check the configuration. If you don’t have the System Variables configured, then simply assign the folder paths.

Finally, go to “Engine --> Rendering” and make sure that “Mobile HDR” is not selected.

If something is not clear, just go to this link 😉

Alright, now we can finally move on 😊

Creating a Dynamic Material


Click on “Add New --> Material” and call it “Dynamic_Mat”. Once inside the material editor, right-click on an empty space and look for “TextureSampleParameter2D”.




Once created, name it "Texture_Sample". It will come with a default texture that you can change if you want (but it doesn't matter in the end). Simply connect the first output to the "Base Color" of the "Dynamic_Mat" node.


Save it and it will be automatically applied. The good thing about this setup is that the Param2D is dynamic 😉

Creating our first Blueprint


Create a new folder and call it “Blueprints”. Here we’re going to create the screen where the images coming from Vector are going to be displayed.

Press “Add New --> Blueprint Class”.


Instead of choosing “Actor” as the parent class…go down to “All Classes” and look for “Image Parser” and select it as parent class.


Name it “ImageRenderer”.

Once created, go to the Viewport tab and click on “Add Component --> Cube”. Simply change its scale to “0.01, 1.0, 1.0”.



Then switch to the “Event Graph” tab. This is where we are going to build our Blueprints.

But first, we need to create a couple of variables.

CubeMaterial --> Material Instance (Object Reference)


This is going to be the material of the cube that we created.

TempImg --> Texture 2D (Object Reference)


This is where we’re going to store the image after converting it from Base 64 to image.

TempMat --> Material Instance (Object Reference)

This is the dynamic material that is going to be assigned to our cube.

ImageJSON --> String

This is the result from calling the API…the Base 64 string.

With the variables ready, we can start creating the first piece of the Blueprint.


Here, we are saying that once our application starts (Event BeginPlay) we’re going to call a function called “Set Timer by Function Name”. This function will call another function every 2.0 seconds (as we ticked the Looping value). The called function will be “MyEvent”.



Here, we are calling the function “MyEvent”, which will call “Get JSON Request” by passing the URL and the Auth. This will be bound to the “OnGetResult” event. The result from the JSON call will be extracted by using Get Object Field, Get Object Array Field, a For Loop and finally a Get String Field in order to get the Base 64 image and store it on the ImageJSON variable.
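
If the node soup is hard to follow, here's a rough Python equivalent of what that chain of JSON nodes does with the OData response (a hedged sketch…the host and Authorization value are placeholders)…

import requests

#Fetch the OData service and walk d -> results -> row -> VECTOREYE,
#which is what the Get Object Field / Get Object Array Field / Get String Field nodes do
url = "https://YourHANA.ondemand.com/VectorEyes/GetAddVectorEye.xsodata/vectoreye"
headers = {"Authorization": "YourAuthorization"}

body = requests.get(url, headers=headers, params={"$format": "json"}).json()

imageJSON = ""
for row in body["d"]["results"]:
    imageJSON = row["VECTOREYE"]  #the Base 64 string ends up in the ImageJSON variable

print(len(imageJSON), "Base 64 characters received")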


After setting the ImageJSON variable, we call the API to clear the table. After this…things get interesting…


Here we are calling our C++ function "Parse Image" like any other Blueprint element. We take the value stored in ImageJSON to be decoded as an image. The result of decoding the Base 64 string goes into "Import Buffer as Texture 2D", which goes into the TempImg variable. After this, "Create Dynamic Material Instance" creates a dynamic material based on our Dynamic_Mat material and assigns it to TempMat, which is passed as the target of Set Texture Parameter Value, while TempImg is passed as the value parameter. Finally, a Set Material node assigns the TempMat material to the Cube.

To make it simple…we grab the Base 64 string…convert it into an image…create a dynamic material instance, use the image as its texture parameter and finally assign it to our cube. Every time we get a new Base 64 value, we will get a new image and our cube will be able to display it 😉

Importing our Blender model


Now, we need to simulate that we're inside Vector…hence…we need to import our Blender .FBX model 😉

Simply press Import and select the .FBX file that you can get from here. Press Import All and you will have it on the screen.

Change the following parameters…


Now, add a Point Light with these parameters…


Next, grab the “ImageRenderer” Blueprint and drag it into the screen. Change the parameters like this…


Then, press "Build" and wait till everything (including the lights) gets built.


Once the build is done…you will have this…


Awesome! Everything is starting to take shape 😊 But now…we need to add the real Oculus Go support 😉

Adding Oculus Go support


So, we configured our project to work on an Oculus Go…but that's not enough 😉 We need to do a couple of extra things…and of course…most important of all…we need to add a way to control Vector using the Oculus Go controller 😊

Create a new folder and call it “Modes”. Then create a new “Blueprint Class” but this time choose “Pawn” and name it “Pawn Blueprint” (Smart, huh?).

When it opens up, go to the left section and select “DefaultSceneRoot”, then click on “Add Component” and select “Scene” and change its name to “VRCameraRoot”.

Select “VRCameraRoot” and add a “Camera” component, name it “VRCamera”.

Select “VRCameraRoot” and add a "Motion Controller" component, name it “OculusGoController”.

Select “OculusGoController” and add a "Static Mesh" component, name it “OculusGoMesh”.

To make it clear…here's a screenshot 😊


With the “OculusGoMesh” selected, go to its properties and on the Static Mesh one, choose “OculusGoController” mesh.


After this, we need to create some variables…the first one will be "CameraHeight" and will be an editable "Vector".



The second one will be called "request" and will be a Json Field Data (Object Reference).

Finally, create one called “Lift” of type Boolean and a String variable named “Var”.



If you're wondering about the open eye next to "CameraHeight", that simply means that it's "Public", and you can change that by clicking on it.

Now, we can continue on the “Event Graph” tab.


Here, we want that when the application starts (Event BeginPlay) the tracking origin gets set to eye level. The SetRelativeLocation node will be called, where the target will be the VRCameraRoot and the new location will be set to CameraHeight. In other words, what we see is going to be at our eye level.



When we press the Thumbstick up (forward), down (backward), left or right, we assign the corresponding command to our Var variable, then we call the Get JSON Request function. The URL will be the API address plus the value of the Var variable.


Here, we want to react to the "Back" button of the Oculus controller. The first time we click, the "Lift" variable is going to be "False", so we make it "True". If it's "True" then we send the "up" command. If we click again, we make it "False" and pass the "down" command. This way we can control Vector's lift handle.
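
In plain code, that toggle looks something like this…a hedged Python sketch of the Blueprint logic, where the host and Authorization value are placeholders…

import requests

BASE = "https://YourHANA.ondemand.com/VectorEyes/"   #placeholder host
HEADERS = {"Authorization": "YourAuthorization"}     #placeholder credentials
lift = False                                         #same role as the "Lift" variable

def on_back_button():
    global lift
    lift = not lift
    command = "up" if lift else "down"
    requests.get(BASE + "AddVectorCommand.xsjs",
                 params={"nid": 1, "command": command}, headers=HEADERS)

on_back_button()   #first press  -> sends "up"
on_back_button()   #second press -> sends "down"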

Alright, compile, save and that’s done 😊 We simply need to add it to our scene. So, drag it and change these parameters.


Also, and this is very important…


Auto Possess Player should be Player 0.

Now, press "Build" and wait till everything (including the lights) gets built.

Then press Play…and you will see this…



Of course, if you try to move using your mouse…nothing will happen…so you need to send it to your Oculus Go 😉

To do that, simply go to Launch and select your device…it will take a long time the first time because all the shaders, Blueprints and so on need to be compiled…but after that, you will be able to put on your headset and look around 😊 Although…you're not going to see anything on the screen because we still need to get Vector up and running 😉

Installing Vector’s SDK


First, make sure Vector is connected to the Internet by using the Vector app…here’s a nice video on how to do that…

Once you check that, kill the app from your phone…as it might interfere with your own application taking control of Vector…

You can install the SDK by doing

python3 -m pip install --user anki_vector

Then…authenticate your Vector by doing…

python3 -m anki_vector.configure

You will be asked for Vector's name, IP address and serial number. You will also be asked for your Anki Cloud credentials.

To get this information, simply put Vector on his charger…and press his top twice. This will give you his name, then lift up and down his handle in order to get the IP. The serial number is on Vector’s bottom.

Creating Vector’s script


This script is the last part of our journey 😊 Simply create a new file called VectorOculusGo.py

VectorOculusGo.py
import anki_vector  #Control Vector
import requests  #Use REST APIs
import json  #Consume JSON
import time  #Manage time
from anki_vector.util import degrees, distance_mm, speed_mmps
import base64 #Encode/Decode images
import datetime  #To get time and data

#URLs to manage upload of Base 64 images and to control Vector using the 
#Oculus Go controller
urlAddEye = "https://YourHANA.ondemand.com/VectorEyes/AddVectorEye.xsjs"
urlGetCommand = "https://YourHANA.ondemand.com/VectorEyes/GetVectorCommand.xsodata/vectorcommand"
urlDeleteCommand = "https://YourHANA.ondemand.com/VectorEyes/DeleteVectorCommand.xsjs"
   
def main():
    #We establish a connection with Vector and enable his camera
    robot = anki_vector.Robot(enable_camera_feed=True)
    #We connect to Vector
    robot.connect()
    i = 0
    #We want this to loop forever…until we close the program
    while i == 0:
        #We grab the latest picture from Vector's camera
        image = robot.camera.latest_image
        #And save it
        image.save("./img/Temp.png")
        #Once saved, we open it
        with open("./img/Temp.png", "rb") as imageFile:
            #We get the time and create a timestamp
            ts = time.time()
            timestamp = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
            #We encode the picture as a Base 64 string
            strImg = base64.b64encode(imageFile.read())
            #The payload holds the parameters that we are sending to the REST API
            payload = ("{\"timestamp\":\"" + timestamp + "\",\"vectoreye\":\"" +
                       strImg.decode('ascii') + "\"}")

            #In the headers, we pass the authentication for the REST API        
            headers = {
                'Content-Type': "application/x-www-form-urlencoded",
                'Authorization': "YourAuthorization",
            }

            #We upload the Base 64 string of the image to the DB
            response = requests.request("POST", urlAddEye, data=payload, headers=headers)
            #We put the application to sleep for 2 seconds just not to overload the DB
            time.sleep(2)
            querystring = {"$format":"json"}
            #Right after uploading the Base 64 string, 
            #we want to get any commands coming through
            response = requests.request("GET", urlGetCommand, headers=headers, 
                                        params=querystring)
            #We convert the response to JSON
            json_response = json.loads(response.text)
            #We need to check if there’s any information first and then extract the command
            try:
                json_text = json_response['d']['results'][0]['COMMAND']
            except:
                json_text = ""
            #Depending on the command, we make Vector move forward, backward or 
            #lift his handle. If the lift was already up, we put it down first…
            if (json_text == 'forward'):
                robot.behavior.drive_straight(distance_mm(50), speed_mmps(50))
            elif (json_text == 'backward'):
                robot.behavior.drive_straight(distance_mm(-50), speed_mmps(50))
            elif(json_text == 'right'):
                robot.behavior.turn_in_place(degrees(-90))
            elif(json_text == 'left'):
                robot.behavior.turn_in_place(degrees(90))
            elif(json_text == 'up'):
                robot.behavior.set_lift_height(0.0)
                robot.behavior.set_lift_height(1.0)
            elif(json_text == 'down'):
                robot.behavior.set_lift_height(0.0)
            #After receiving the command, we simply delete it from the DB
            response = requests.request("GET", urlDeleteCommand, headers=headers)
                
if __name__ == '__main__':
    main()

Nice, the source code is pretty much self-explanatory…but still…let's go through what is going on in this application…

We want Vector to take a picture every 2 seconds…once a picture is taken, we want to convert it into a Base 64 string and then, along with a Timestamp (which is a date with hours, minutes and seconds), send it to the Database. Once that's done…we rest for 2 seconds and check if there's any command available. If there's one, we make Vector act accordingly…and just to avoid repeating the same command over and over…we simply delete it from the Database, so a new command can be issued.

Putting it all together


Great! Now we have our application running on the Oculus Go and our Vector ready to execute our script.

So…get a Terminal or CMD window ready with the following line…

python3 VectorOculusGo.py

Put on your Oculus Go headset, grab your controller and then hit "Enter" on your keyboard. Our script will start running and you will see what Vector is looking at…something like this…


I know…that's actually running on Unreal Engine and not on the Oculus…but that's what the video is for 😉


I hope you like this blog and enjoy controlling Vector from the inside! -:D

Greetings,

Blag.
SAP Labs Network.

Saturday, December 15, 2018

Hey Vector, who do I look like?


I have played with Cozmo in the past, so when Vector came out...I knew I needed to do something with it ;)

So...what’s Vector?


Pretty much a black Cozmo? Well...yes and no :) Vector has a better processor with 4 cores, a microphone, almost double the amount of parts, a better camera and a color display.

As you know...I’m a really big fan of SAP Leonardo Machine Learning APIs...as they allow you to easily consume Machine Learning services.

For this blog I wanted to do something that I have always liked...take a picture of someone and then compare it with photos of famous actors and actresses and see who this person resembles the most ;)

So, let’s start :D

Installing the Vector SDK

Make sure that Vector is connected to the Internet by using Vector's app on iPhone or Android. Here's a nice video on how to do that.

Once your Vector is connected to the Internet...make sure to simply kill the Vector's app on your phone.

The Vector SDK was only available to the people who backed Anki on their Kickstarter campaign.
...but since November 11th, the SDK is on Public Alpha! :D Which means...you can finally get your hands on it ;)

If by any chance you got the SDK installed before...remove it before moving forward…

python3 -m pip uninstall anki_vector

Then simply install it by doing this…

python3 -m pip install --user anki_vector

Then, you need to authenticate your Vector…

python3 -m anki_vector.configure

You will be asked for Vector's name, IP address and serial number. You will also be asked for your Anki Cloud credentials.

To get this information simply put Vector on his charger...and press his top twice. This will give you his name, then lift his handle up and down in order to get the IP. The serial number is on Vector's bottom.

The Learning Phase


First things first...we need a bunch of pictures from famous people...for that I relied on The Movie DB website...


I went and downloaded, almost randomly, 100 images of both men and women. I didn't go into each person's page but rather saved the "thumbnails".

Now, there's an SAP Leonardo API called "Inference Service for Face Feature Extraction" which basically grabs an image, determines if there's a face or not and then extracts its features...like the color of the eyes, the form of the mouth, the hair and so on...and that information is returned as a nice although pretty much impossible to decipher vector of features. I mean...they look just like numbers...and they could mean anything :P

Anyway...I created a folder called "People" and dropped all 100 images there. So, the next step is of course getting the features for all the images...and doing that manually is obviously not only hard but pointless...it's way better to automate the process ;)

One programming language that I have grown to love is Crystal...fast as C, slick as Ruby, yep...pretty much a better way of doing Ruby :)

Installation is pretty easy and you can find instructions here, but I'm using Ubuntu on VMWare, so here are the instructions for it…

On a terminal window copy and paste this…

curl -sSL https://dist.crystal-lang.org/apt/setup.sh | sudo bash

Then simply do this…

sudo apt-get update

sudo apt install crystal

Installation of the following modules is optional but recommended…

sudo apt install libssl-dev      # for using OpenSSL
sudo apt install libxml2-dev     # for using XML
sudo apt install libyaml-dev     # for using YAML
sudo apt install libgmp-dev      # for using Big numbers
sudo apt install libreadline-dev # for using Readline

Once we’re done...it’s time to write the application…first create a folder called “Features”.

Call your script PeopleGenerator.cr and copy and paste the following code…


PeopleGenerator.cr
require "http"
require "json"

class FaceFeature
  JSON.mapping({
    face_feature: Array(Float64)
  })
end

class Predictions
  JSON.mapping({
    faces: Array(FaceFeature)
  })
end

class Person
  JSON.mapping({
    id: String,
    predictions: Array(Predictions)
  })
end

folder = Dir.new("#{__DIR__}/People")
while photo = folder.read
  if photo != "." && photo != ".." && photo != "Features"
    io = IO::Memory.new
    builder = HTTP::FormData::Builder.new(io)

    File.open("#{__DIR__}/People/" + photo) do |file|
      metadata = HTTP::FormData::FileMetadata.new(filename: photo)
      headers = HTTP::Headers{"Content-Type" => "image/jpg"}
      builder.file("files", file, metadata, headers)
    end
    builder.finish

    headers = HTTP::Headers{"Content-Type" => builder.content_type,
                            "APIKey" => "YourAPIKey",
                            "Accept" => "application/json"}
    response = HTTP::Client.post("https://sandbox.api.sap.com/ml/facefeatureextraction/face-feature-extraction",
                                 body: io.to_s, headers: headers)

    feature_name = "#{__DIR__}/Features/" + File.basename(photo, ".jpg") + ".txt"

    puts photo

    File.write(feature_name, Person.from_json(response.body).predictions[0].faces[0].face_feature)
    sleep 2.second
  end
end

command = "zip -r -j features.zip #{__DIR__}/Features"
Process.run("sh", {"-c", command})

puts "Done."

Let’s explain the code before we check the results…

require "http"
require "json"

We need these two libraries to be able to call the SAP Leonardo API and also to be able to read and extract the results…

class FaceFeature
  JSON.mapping({
    face_feature: Array(Float64)
  })
end

class Predictions
  JSON.mapping({
    faces: Array(FaceFeature)
  })
end

class Person
  JSON.mapping({
    id: String,
    predictions: Array(Predictions)
  })
end

This is the JSON mapping that we need to use to extract the information coming back from the API.

folder = Dir.new("#{__DIR__}/People")
while photo = folder.read
  if photo != "." && photo != ".." && photo != "Features"
    io = IO::Memory.new
    builder = HTTP::FormData::Builder.new(io)

    File.open("#{__DIR__}/People/" + photo) do |file|
      metadata = HTTP::FormData::FileMetadata.new(filename: photo)
      headers = HTTP::Headers{"Content-Type" => "image/jpg"}
      builder.file("files", file, metadata, headers)
    end
    builder.finish

    headers = HTTP::Headers{"Content-Type" => builder.content_type, "APIKey" => "YourAPIKey", "Accept" => "application/json"}
    response = HTTP::Client.post("https://sandbox.api.sap.com/ml/facefeatureextraction/face-feature-extraction", body: io.to_s, headers: headers)

    feature_name = "#{__DIR__}/Features/" + File.basename(photo, ".jpg") + ".txt"

    puts photo

    File.write(feature_name, Person.from_json(response.body).predictions[0].faces[0].face_feature)
    sleep 2.second
  end
end

command = "zip -r -j features.zip #{__DIR__}/Features"
Process.run("sh", {"-c", command})

puts "Done."

This section is larger; first we specify the folder from which the images will be read. Then for each entry we check whether it's a picture or a folder...of course we want images only…

Then, we create a FormData builder in order to avoid having to Base 64 encode the images...put them in a JSON payload and so on...this way it's easier and native…

We open each image and feed the FormData metadata and headers.

Also, we need to pass the extra “headers” required by SAP Leonardo.

Once that is done, we can simply call the REST API, and then we create a "Feature Name" which is going to be the name of the generated file...basically the image name with a ".txt" extension.

For each file we're going to extract the feature vector from the JSON response, write the file and wait 2 seconds, just to not flood the API…

Once that’s done, we simply call a “zip” command from the terminal and zip it…


Now, the zip file will contain 100 files...each one with the features of one of the images that we have in our "People" folder.

Simple as that...we have trained our application ;)

The Testing and Execution Phase


I know that usually you test your model first...but for this once...we can do both at the same time ;)

We're going to create a Python script that will deal with taking our picture...call the Features API on that image and then call another API to determine who we look like…

Let’s create a script called GuessWho.py


GuessWho.py
import anki_vector
import threading
import requests
import os
import json
import time
import subprocess
import re
import math
from PIL import Image
from anki_vector.events import Events
from anki_vector.util import degrees

event_done = False
said_text = False
new_width  = 184
new_height = 96

def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial, enable_face_detection=True, 
                           enable_camera_feed=True) as robot:
        evt = threading.Event()

        def on_robot_observed_face(event_type, event):

            global said_text
            if not said_text:
                said_text = True
                robot.say_text("Taking Picture!")
                image = robot.camera.latest_image
                image.save("Temp.png")
                robot.say_text("Picture Taken!")
                evt.set()

        robot.behavior.set_head_angle(degrees(45.0))
        robot.behavior.set_lift_height(0.0)

        robot.events.subscribe(on_robot_observed_face, Events.robot_observed_face)

        try:
            if not evt.wait(timeout=10):
                print("---------------------------------")
        except KeyboardInterrupt:
            pass

def guess_who():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot: 
        url = "https://sandbox.api.sap.com/ml/facefeatureextraction/
               face-feature-extraction"
                
        img_path = "Temp.png"
        files = {'files': open (img_path, 'rb')}

        headers = {
            'APIKey': "YourAPIKey",
            'Accept': "application/json",
        }
    
        response = requests.post(url, files=files, headers=headers)
  
        robot.say_text("I'm processing your picture!")
    
        json_response = json.loads(response.text)
        json_text = json_response['predictions'][0]['faces'][0]['face_feature']
    
        f = open("myfile.txt", "w")
        f.write(str(json_text))
        f.close()
    
        time.sleep(1)
    
        p = subprocess.Popen('zip -u features.zip myfile.txt', shell=True)
    
        time.sleep(1)
    
        url = "https://sandbox.api.sap.com/ml/similarityscoring/similarity-scoring"
    
        files = {'files': ("features.zip", open ("features.zip", 'rb'), 
                 'application/zip')}
        params = {'options': '{"numSimilarVectors":100}'}
    
        response = requests.post(url, data=params, files=files, headers=headers)
        json_response = json.loads(response.text)

        robot.say_text("I'm comparing your picture with one hundred other pictures!")

        for x in range(len(json_response['predictions'])):
            if json_response['predictions'][x]['id'] == "myfile.txt":
                name, _ = os.path.splitext(json_response['predictions'][x]
                          ['similarVectors'][0]['id']) 
                name = re.findall('[A-Z][^A-Z]*', name)
                full_name = " ".join(name)
                pic_name = "People/" + "".join(name) + ".jpg"
                avg = json_response['predictions'][x]['similarVectors'][0]['score']
                robot.say_text("You look like " + full_name + 
                               " with a confidence of " + 
                                str(math.floor(avg * 100)) + " percent")
                image_file = Image.open(pic_name)
                image_file = image_file.resize((new_width, new_height), 
                                                Image.ANTIALIAS)  
                screen_data = anki_vector.screen.convert_image_to_screen_data(
                                                                   image_file)
                robot.behavior.set_head_angle(degrees(45.0))
                robot.conn.release_control()
                time.sleep(1)
                robot.conn.request_control()                
                robot.screen.set_screen_with_image_data(screen_data, 0.0)
                robot.screen.set_screen_with_image_data(screen_data, 25.0)
                
                print(full_name)
                print(str(math.floor(avg * 100)) + " percent")

                time.sleep(5)

if __name__ == '__main__':
    main()
    guess_who()

This script is bigger...so let’s make sure we understand everything that is going on…

import anki_vector
import threading
import requests
import os
import json
import time
import subprocess
import re
import math
from PIL import Image
from anki_vector.events import Events
from anki_vector.util import degrees

That's a lot of libraries :) The first one is pretty obvious...it's how we connect to Vector ;)

The second one is to handle “threads” as we need to do a couple of things asynchronously.

The third one is to handle the call to the APIs.

The fourth one is to handle folder access.

The fifth one is to handle the JSON response coming back from the API.

The sixth one is so that we can have a delay in the execution of the application.

The seventh is to be able to call terminal commands.

The eighth one is to use Regular Expressions.

The ninth one is to handle math operations.

The tenth one is to handle image operations.

The eleventh is to handle events as we want Vector to try to detect our face.

The last one is to be able to move Vector's head.

def main():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial, enable_face_detection=True, enable_camera_feed=True) as robot:
        evt = threading.Event()

        def on_robot_observed_face(event_type, event):

            global said_text
            if not said_text:
                said_text = True
                robot.say_text("Taking Picture!")
                image = robot.camera.latest_image
                image.save("Temp.png")
                robot.say_text("Picture Taken!")
                evt.set()

        robot.behavior.set_head_angle(degrees(45.0))
        robot.behavior.set_lift_height(0.0)

        robot.events.subscribe(on_robot_observed_face, Events.robot_observed_face)

        try:
            if not evt.wait(timeout=5):
                print("---------------------------------")
        except KeyboardInterrupt:
            pass

This one is for sure...our main event :) Here we’re going to open a connection with Vector, and as we can have multiple Vectors...we need to grab the serial number to specify which one we want to use...also we need to activate both face detection and camera feed.

We're going to use a threading event as we need to wait until Vector detects our face. If he can see us, then he will say "Taking Picture!"...grab the image, save it and then say "Picture Taken!". After that the event is done...but...while this is happening we can move his head and drop down his handle so that he can see us better.

As you can see, we subscribe to the robot_observed_face event, which fires when our face is there and visible…

def guess_who():
    args = anki_vector.util.parse_command_args()
    with anki_vector.Robot(args.serial) as robot:
        url = "https://sandbox.api.sap.com/ml/facefeatureextraction/face-feature-extraction"
                
        img_path = "Temp.png"
        files = {'files': open (img_path, 'rb')}

        headers = {
            'APIKey': "YourAPIKey",
            'Accept': "application/json",
        }

        response = requests.post(url, files=files, headers=headers)

        robot.say_text("I'm processing your picture!")

        json_response = json.loads(response.text)
        json_text = json_response['predictions'][0]['faces'][0]['face_feature']

        f = open("myfile.txt", "w")
        f.write(str(json_text))
        f.close()

        time.sleep(1)

        p = subprocess.Popen('zip -u features.zip myfile.txt', shell=True)

        time.sleep(1)

        url = "https://sandbox.api.sap.com/ml/similarityscoring/similarity-scoring"

        files = {'files': ("features.zip", open ("features.zip", 'rb'), 'application/zip')}
        params = {'options': '{"numSimilarVectors":100}'}

        response = requests.post(url, data=params, files=files, headers=headers)
        json_response = json.loads(response.text)

        robot.say_text("I'm comparing your picture with one hundred other pictures!")

        for x in range(len(json_response['predictions'])):
            if json_response['predictions'][x]['id'] == "myfile.txt":
                name, _ = os.path.splitext(json_response['predictions'][x]['similarVectors'][0]['id']) 
                name = re.findall('[A-Z][^A-Z]*', name)
                full_name = " ".join(name)
                pic_name = "People/" + "".join(name) + ".jpg"
                avg = json_response['predictions'][x]['similarVectors'][0]['score']
                robot.say_text("You look like " + full_name + " with a confidence of " + str(math.floor(avg * 100)) + " percent")
                image_file = Image.open(pic_name)
                image_file = image_file.resize((new_width, new_height), Image.ANTIALIAS)  
                screen_data = anki_vector.screen.convert_image_to_screen_data(image_file)
                robot.behavior.set_head_angle(degrees(45.0))
                robot.conn.release_control()
                time.sleep(1)
                robot.conn.request_control()                
                robot.screen.set_screen_with_image_data(screen_data, 0.0)
                robot.screen.set_screen_with_image_data(screen_data, 25.0)
                
                print(full_name)
                print(str(math.floor(avg * 100)) + " percent")

                time.sleep(5)

This method will handle the rough parts of our application…

We connect to Vector once again...although this time we don’t need to activate anything as the picture has been already taken.

We pass the URL for the features API.

Then we open our “Temp.png” file which is the image that Vector took from us.

We need to pass the extra header for the SAP Leonardo API.

We call the API and get the JSON response.

Again, we need to extract the feature information from the JSON response. This time however we're going to create a single file called "myfile.txt". We're going to make the application sleep for a second and then call a terminal process to add "myfile.txt" to our features.zip file…

Then we sleep again for another second...and this is just not to overflow the API calls…

Here, we’re going to call a different API which is called Inference Service for Similarity Scoring

This API will read all 101 feature files and determine the cosine distance (-1 to 1) of each file compared to the others. This way it can determine which files are closest to each other and hence whom we resemble the most...providing us with a score we can read as a percentage of confidence.
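
If "cosine distance" sounds abstract, here's a tiny pure-Python illustration (with made-up feature vectors) of the kind of score the service reports for each pair…

import math

#Made-up feature vectors, just for the demo...the real ones have many more numbers
me      = [0.12, -0.45, 0.83, 0.07]
actor_a = [0.10, -0.40, 0.80, 0.10]
actor_b = [-0.70, 0.20, -0.10, 0.90]

def cosine_score(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)   #always lands between -1 and 1

print(cosine_score(me, actor_a))   #close to 1.0 -> strong resemblance
print(cosine_score(me, actor_b))   #much lower   -> weak resemblance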

This call is a little bit more complicated than the previous one as we need to upload the zip file…

        files = {'files': ("features.zip", open ("features.zip", 'rb'), 'application/zip')}
        params = {'options': '{"numSimilarVectors":100}'}

        response = requests.post(url, data=params, files=files, headers=headers)
        json_response = json.loads(response.text)

Take into account that while we have 101 files...we need to compare 1 file against 100 others...so we pass 100 as the “numSimilarVectors”.

Once we've done that, we need to read each section of the JSON response until we find the id that has the value "myfile.txt". Once we have that, we use a Regular Expression to split the name (without the extension) into words. Also, we need the name of the image...so in the end we need to have something like this…

full_name = “Nicolas Cage”
pic_name = “People/NicolasCage.jpg”
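
Here's a quick demo of that extraction (the file name is just the hypothetical example from above)…

import os
import re

file_id = "NicolasCage.jpg"                      #hypothetical id coming back from the API

name, _ = os.path.splitext(file_id)              #-> "NicolasCage"
parts = re.findall('[A-Z][^A-Z]*', name)         #-> ['Nicolas', 'Cage']

full_name = " ".join(parts)                      #-> "Nicolas Cage"
pic_name = "People/" + "".join(parts) + ".jpg"   #-> "People/NicolasCage.jpg"

print(full_name)
print(pic_name)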

We need to extract the percentage of confidence as well…

avg = json_response['predictions'][x]['similarVectors'][0]['score'] 

So, we can have Vector saying “You look like Nicolas Cage with a confidence of 75 percent”.

Now...here comes the fun part ;) We already know who we look like...but let's say...we don't really remember what Nicolas Cage looks like...so let's take advantage of Vector's fancy screen and display it there ;) By the way...we need to release control, gain it back, display the image for zero seconds and then re-display it...this is mainly because Vector's eyes keep blocking the image on the screen...and this is a way to prevent that behavior ;)

                image_file = Image.open(pic_name)
                image_file = image_file.resize((new_width, new_height), Image.ANTIALIAS)  
                screen_data = anki_vector.screen.convert_image_to_screen_data(image_file)
                robot.behavior.set_head_angle(degrees(45.0))
                robot.conn.release_control()
                time.sleep(1)
                robot.conn.request_control()                
                robot.screen.set_screen_with_image_data(screen_data, 0.0)
                robot.screen.set_screen_with_image_data(screen_data, 25.0)

First we open the image, then we resize it so it fits on the screen, then we convert it to Vector’s format and finally we display it on the screen, specifying for how long we want it there…

                print(full_name)
                print(str(math.floor(avg * 100)) + " percent")

                time.sleep(5)

We print some information on the screen and then send the script to sleep for 5 seconds so the image doesn't disappear too quickly ;)

Finally! The most important part of the whole script...calling the functions :P

if __name__ == '__main__':
    main()
    guess_who()

And that’s pretty much it :) We open a terminal window and type…

python3 GuessWho.py

Vector is going to try to look at us and detect our face...he will take a picture...the SAP Leonardo APIs are going to be called...and we will hear and see who we look like ;)

Hope you enjoyed this blog...I obviously did :D

And just to wrap up things...here’s a small video…


BTW...this is the picture that Vector took of me...


Greetings,

Blag.
SAP Labs Network.

Wednesday, September 19, 2018

SAP Leonardo Machine Learning API’s on the Go


Working for the d-shop, first in the Silicon Valley and now in Toronto, allows me to use my creativity and grab any new gadget that hits the market.

This time, it was Oculus Go's turn 😉 and what's the Oculus Go? Well, it is a standalone VR headset, which basically means…no tangled cables 😉

For this project I had the chance to work with either Unity or Unreal Engine…I had used Unity many times to develop Oculus Rift and Microsoft HoloLens applications…so I thought Unreal Engine would be a better choice this time…although I had never used it in a big project before…especially because nothing beats Unreal when it comes to graphics…

With Unreal chosen…I needed to make another decision…C++ or Blueprints…well…while I have used C++ in the past for a couple of Cinder applications…Blueprints looked better as I wanted to develop faster and without too many complications…and well…that’s half of the truth…sometimes Blueprints can become really messy 😊

Just so you know, I used Unreal Engine 4.20.2 and created a Blueprints application.



Since the beginning I knew that I wanted to use the SAP Leonardo Machine Learning APIs…as I had used them before for my blog "Cozmo, read to me", where I used a Cozmo robot, OpenCV and SAP Leonardo's OCR API to read a whiteboard with a handwritten message and have Cozmo read it out loud.

The idea

This time, I wanted to showcase more than just one API…so I needed to choose which ones…gladly that wasn't really hard…most APIs are more "Enterprise" oriented…so that left me with Image Classification, OCR and Language Translation…

With all that decided…I still needed to figure out how to use those APIs…I mean…the Oculus Go is Virtual Reality…so no chance of looking at something, taking a picture and sending it to the API…

So, I thought…why don’t I use Blender (which is an Open-Source 3D computer graphics software toolset) and make some models…then I can render those models…take a picture and send it to the API…and having models means…I could turn them into “.fbx” files and load them into Unreal for a nicer experience…

With the OCR and Language Translation APIs…it was different…as I needed images with text…so I decided to use Inkscape (which is an Open-Source vector graphics editor).

The implementation

When I first started working on the project…I knew I needed to start step by step…so I first did a Windows version of the App…then ported it to Android (Which was pretty easy BTW) and finally ported it to Oculus Go (Which was kind of painful…)

So, sadly I’m not going to be able to put any source code here…simply because I used Blueprints…and I’m not sure if you would like to reproduce them by hand ☹ You will see what I mean later on this blog…

Anyway…let’s keep going 😊

When I thought about this project, the first thing that came into my mind was…I want to have a d-shop room…with some desks…a sign for each API…some lights would be nice as well…



So, doesn’t look that bad, huh?

Next, I wanted to work on the "Image Classification" API…so I wanted it to be fairly similar…but with only one desk in the middle…which later turned into a pedestal…with the 3D objects rotating on top of it…then there should be a space ready to show the results back from the API…also…arrows to let the user change the 3D model…and a house icon to allow the user to go back to the "Showfloor"…




You will notice two things right away…first…what is that ball supposed to be? Well…that's just a placeholder that will be replaced by the 3D models 😊 Also…you can see a black poster that says "SAP Leonardo Output"…that's hidden and only becomes visible when we launch the application…

For the “Optical Character Recognition” and “Language Translation” scenes…it’s pretty much the same although the last one doesn't have arrows 😊





The problems

So that's pretty much how the scenes are related…but of course…I hit the first issue fast…how to call the APIs using Blueprints? I looked online and most of the plugins are paid ones…but gladly I found a free one that really surprised me…UnrealJSONQuery works like a charm and is not that hard to use…but of course…I needed to change a couple of things in the source code (like adding the header for the key and changing the parameter to upload files). Then I simply recompiled it and voila! I got JSON in my application 😉

But you want to know what I changed, right? Sure thing 😊 I simply unzipped the file, went to JSONQuery --> Source --> JSONQuery --> Private and opened JsonFieldData.cpp

Here I added a new header with ("APIKey", "MySAPLeonardoAPIKey") and then I looked for PostRequestWithFile and changed the "file" parameter to "files"…

To compile the source code, I simply created a new C++ project, then a "Plugins" folder in the root folder of my project and put everything from the downloaded folder there…opened the project…let it compile and then re-created everything from my previous project…once that was done…everything started to work perfectly…

So, let’s see part of the Blueprint used to call the API…




Basically, we need to create the JSON, call the API and then read the result and extract the information.
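
Since the Blueprint itself is hard to paste here, this is roughly what that call boils down to, written as a hedged Python sketch…the endpoint path, file name and response fields are placeholders, so check the SAP API Hub page for the real ones…

import requests

#Placeholder endpoint...look up the exact Image Classification path on the SAP API Hub
url = "https://sandbox.api.sap.com/ml/YourImageClassificationEndpoint"
headers = {"APIKey": "YourAPIKey", "Accept": "application/json"}

#One of the images rendered from the Blender models (name is just an example)
with open("Apples.jpg", "rb") as imageFile:
    files = {"files": imageFile}
    response = requests.post(url, files=files, headers=headers)

#Read the result and extract the information, just like the Blueprint nodes do
print(response.json())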

Everything was going fine and dandy…until I realized that I needed to package the images generated by Blender…I had no idea how to do it…so gladly…the Victory Plugin came to the rescue 😉 Victory has some nodes that allow you to read directories from inside the packaged application…so I was all set 😊

This is how the Victory plugin looks when used in a Blueprint…




The Models

For the 3D models, as I said…I used Blender…I modeled them using "Cycles Render", baked the materials and then rendered the image using "Blender Render" to be able to generate the .fbx files…





If the apples look kind of metallic or wax like…blame my poor lighting skills ☹

When loaded into Unreal…the models look really nice…


Now…I know you want to see how a full Blueprint screen looks…this one is for the 3D models on the Image Classification scene…


Complicated? Well...kind of…usually Blueprints are like that…but they are pretty powerful…

Here’s another one…this time for the “Right Arrow” which allows us to change models…


Looks weird…but works just fine 😉



You may notice that both "Image Classification" and "OCR" have Right and Left arrows…so I needed to reuse some variables and they needed to be shared between Blueprints…so…for that I created a "Game Instance" where I simply created a bunch of public variables that could then be shared and updated.

If you're wondering what I used Inkscape for…well…I wanted to have a kind of neon sign image and a handwritten image…



From Android to Oculus Go

You may wonder…why did it change from Android to the Oculus Go? Aren't they both Android based? Well…yes…but still…thanks to personal experience…I know that things change a lot…

First…on Android…I created the scenes…and everything was fine…on the Oculus Go…no new scenes were loaded…when I clicked on a sign…the first level loaded itself again… ☹ Why? Because I needed to include them in the list of maps to be packaged…

And the funny thing is that the default projects folder for Unreal is "Documents"…so when I tried to add the scene it complained because the path was too long…so I needed to clone the project and move it to a folder on C:\

Also…when switching from Windows to Android…it was as simple as changing the "Click" to "Touch"…but for the Oculus Go…well…I needed to create a "Pawn"…where I put a camera, a motion controller and a pointer (acting like a laser pointer)…here I switched the "Touch" for a "Motion Controller Thumbstick"…and then from there I needed to control all the navigation details…very tricky…

Another thing that changed completely was the “SAP Leonardo Output”…let’s see how that looked on Android…



Here you can see that I used a “HUD”…so wherever you look…the HUD will go with you…

On the Oculus Go…this didn’t happen at all…first I needed to put a black image as a background…

Then I needed to create an actor and put the HUD inside…turning it into a 3D HUD…




The final product

When everything was done…I simply packaged my app and loaded it onto the Oculus Go…and by using Vysor I was able to record a simple session so you can see how this looks in real life 😉 Of course…the downside (because first…I'm too lazy to keep figuring things out and second because it's too much hassle) is that you need to run this from the "Unknown Sources" section on the Oculus Go…but…it's there and working and that's all that matters 😉

Here’s the video so you can fully grasp what this application is all about 😊





I hope you like it 😉

Greetings,

Blag.
SAP Labs Network.