Monday, July 30, 2018

Cozmo, read to me

Do you know Cozmo? The friendly robot from Anki? He is...

Cozmo is a programmable robot with many features...and one of those is that you can have Cozmo take a picture of something...and then do something with that picture...

To code for Cozmo you need to use Python...actually...Python 3 -;)

For this blog, we're going to need a couple of libraries...let's install them...

pip3 install 'cozmo[camera]'

This will install the Cozmo SDK...and you will need to install the Cozmo app on your phone as well...

If you have the SDK installed already, you may want to upgrade it because if you don't have the latest version it might not work...

pip3 install --upgrade cozmo

Now, we need a couple of extra things...

sudo apt-get install python-pygame
pip3 install pillow
pip3 install numpy

pygame is a game development framework.
pillow is a friendly fork of the PIL library and it's used to manage images.
numpy allows us to work with arrays and matrices in Python.

That was the easy part...now we need to install OpenCV...which allows us to manipulate images and video...

This one is a little bit tricky, so if you get stuck...search on Google or just drop me a message...

First, make sure that OpenCV is not already installed by removing it...unless you are sure it's working properly for you...

sudo apt-get remove opencv

Then, install the following prerequisites...

sudo apt-get install build-essential cmake pkg-config yasm python-numpy

sudo apt-get install libjpeg-dev libjpeg8-dev libtiff5-dev libjasper-dev 

sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libdc1394-22-dev

sudo apt-get install libxvidcore-dev libx264-dev libxine-dev libfaac-dev

sudo apt-get install libgtk-3-dev libtbb-dev libqt4-dev libmp3lame-dev

sudo apt-get install libatlas-base-dev gfortran

sudo apt-get install libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libxvidcore-dev x264 v4l-utils

If by any chance something is not available on your system, simply remove it from the list and try again...unless you're like me and want to spend hours trying to get everything installed...

Now, we need to download the OpenCV source code so we can build it...from the source...

Unzip the downloaded file...this should produce the folder opencv-3.4.0...

Then, we need to download the contributions because there are some things not bundled in OpenCV by default...and you might need them for some other project...

Unzip that file as well...this should produce the folder opencv_contrib-3.4.0...

As we have both folders, we can start compiling...

cd opencv-3.4.0
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_CXX_COMPILER=/usr/bin/g++ \
      -D OPENCV_EXTRA_MODULES_PATH=/YourPath/opencv_contrib-3.4.0/modules \
      -D PYTHON_EXECUTABLE=/usr/bin/python3.6 \
      ..

Pay extra attention to passing the correct path to your opencv_contrib folder...it's better to pass the full path to avoid errors...

And yes...that's a pretty long command for a build...and it took me a long time to make it work...as you need to figure out all the parameters...

Once cmake is done, it will have prepared the recipe...now we need to run make to actually build...

make -j2

If there's any mistake, simply do this...

make clean

Then, we can finally install OpenCV by doing this...

sudo make install
sudo ldconfig

To test that it's working properly...simply open a Python 3 console and do this...

>>> import cv2
>>> cv2.__version__
'3.4.0'

If you don't have any errors...then we're good to go -;)

That was quite a lot of work...anyway...we need an extra tool to make sure our image gets nicely processed...

Download textcleaner and put it in the same folder as your Python script...

And...just in case you're wondering...yes...we're going to have Cozmo take a picture...we're going to process it...use SAP Leonardo's OCR API and then have Cozmo read it back to us...pretty cool, huh?
SAP Leonardo's OCR API is still on version 2Alpha1...but regardless of that...it works amazingly well -;)

Although keep in mind that if the result is not always accurate...that's because of the lighting, the position of the image, your handwriting and the fact that the OCR API is still in Alpha...anyway...first things first...we need a whiteboard...

And my handwriting is far from good... -:(

Now, let's jump into the source code...
import cozmo
from cozmo.util import degrees
import PIL
import cv2
import numpy as np
import os
import requests
import json
import re
import time
import pygame
import _thread

def input_thread(L):
 input()
 L.append("Enter")

def process_image(image_name):
 image = cv2.imread(image_name)
 img = cv2.resize(image, (600, 600))
 img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
 blur = cv2.GaussianBlur(img, (5, 5), 0)
 denoise = cv2.fastNlMeansDenoising(blur)
 thresh = cv2.adaptiveThreshold(denoise, 255, 
                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 11, 2)
 blur1 = cv2.GaussianBlur(thresh, (5, 5), 0)
 dst = cv2.GaussianBlur(blur1, (5, 5), 0)
 cv2.imwrite('imggray.png', dst)
 cmd = './textcleaner -g -e normalize -o 12 -t 5 -u imggray.png out.png'
 os.system(cmd)

def ocr():
 url = ""
 img_path = "out.png"
 files = {'files': open(img_path, 'rb')}
 headers = {
     'APIKey': "APIKey",
     'Accept': "application/json",
 }
 response = requests.post(url, files=files, headers=headers)
 json_response = json.loads(response.text)
 json_text = json_response['predictions'][0]
 json_text = re.sub('\n', ' ', json_text)
 json_text = re.sub('3', 'z', json_text)
 json_text = re.sub('0|O', 'o', json_text)
 return json_text

def cozmo_program(robot: cozmo.robot.Robot):
 robot.camera.image_stream_enabled = True
 robot.camera.color_image_enabled = False
 robot.set_head_angle(degrees(10.0)).wait_for_completed()
 L = []
 _thread.start_new_thread(input_thread, (L,))
 while True:
  if L:
   filename = "Message" + ".png"
   pic_filename = filename
   latest_image = robot.world.latest_image
   latest_image.raw_image.save(pic_filename, 'PNG')
   robot.say_text("Picture taken!").wait_for_completed()
   process_image(pic_filename)
   message = ocr()
   robot.say_text(message, use_cozmo_voice=True).wait_for_completed()
   L.pop()

cozmo.run_program(cozmo_program, use_viewer=True, force_viewer_on_top=True)

Let's analyze the code a little bit...

We're going to use threads, as we need to have a window where we can see what Cozmo is looking at and another with Pygame where we can press "Enter" as a command to have Cozmo take a picture.

Basically, when we run the application, Cozmo will move his head and get into picture mode...then, if we press "Enter" (on the terminal screen) it will take a picture and send it to our OpenCV processing function.

This function will simply grab the image, scale it, convert it to grayscale, and apply a GaussianBlur to remove noise and reduce detail. Then we're going to apply a denoising to get rid of dust and fireflies...apply a threshold to separate the white and black pixels, and apply a couple more blurs...

Finally we're going to call textcleaner to further remove noise and make the image cleaner...

So, here is the original picture taken by Cozmo...

This is the picture after our OpenCV post-processing...

And finally, this is our image after using textcleaner...

Finally, once we have the image the way we wanted, we can call the OCR API which is pretty straightforward...

To get the API Key, simply log in to the API portal...

Once we have the response back from the API, we can do some Regular Expressions cleanup just to make sure some characters don't get wrongly recognized...

Finally, we can have Cozmo read the message out loud -;) And just for demonstration purposes...

Here, I was lucky enough that the lighting and everything was perfectly set...so it was a pretty clean response...further tests were pretty bad -:( But it's important to have good lighting...

Of course you want to see a video of the process in action, right? Well...funny thing...my first try was perfect! Even better than this one...but I didn't shoot the video -:( Further tries were pretty crappy until I could get something acceptable...and that is what you're going to watch now...the sun coming through the window didn't help me...but it's pretty good anyway...

Hope you liked this blog -:)


SAP Labs Network.

Monday, May 21, 2018

The Blagchain

Lately, I have been learning about Blockchain and Ethereum. Two really nice and interesting topics...but as they say...the best way to learn is by doing...so I put myself to work on the Blagchain.

So, what's the Blagchain? Basically, it's a small Blockchain application that picks some things from Blockchain and some things from Ethereum and it was built as an educational tool...in the Blagchain you can create a user, post a product or buy one, and everything will be stored in a chain-like structure...
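By the way...if that "chain-like structure" sounds abstract...here's a tiny Python sketch of the idea (the Blagchain itself is not written in Python, and the field names here are just for illustration)...each block carries the hash of the previous block, which is what chains them together...

```python
import hashlib
import json

def make_block(data, previous_hash):
    # A block is just its data plus the hash of the previous block...
    # that "previous_hash" field is what links the chain together.
    block = {"data": data, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

# Genesis Block first...every later transaction points at the last hash...
chain = [make_block("Genesis", "0")]
chain.append(make_block({"user": "blag", "posts": "Product", "fee": 0.1},
                        chain[-1]["hash"]))
chain.append(make_block({"user": "guest", "buys": "Product", "fee": 0.1},
                        chain[-1]["hash"]))

# Every block points at its predecessor's hash...
assert all(chain[i]["previous_hash"] == chain[i - 1]["hash"]
           for i in range(1, len(chain)))
```

Tamper with any block and its hash changes...breaking every link after it...that's the whole trick...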

Before we jump into the screenshots...let me tell you about the technology I chose for this little project...

There are many technologies out there...so choosing the right one is always hard...halfway through you may realize that nope...that was not the smartest decision...some other language can do a better job in less time...or maybe that particular feature is not available and you didn't know it because you never needed it before...

When I started learning about Blockchain and Ethereum...I knew I wanted to build the Blagchain using a web interface...so the first languages that came into my mind were out of the question...basically because they don't provide web interfaces or simply because it would be too painful to build the app using them...also I wanted a language with few dependencies and with easy installation and extension...I wanted an easy but fast language...and then...almost instantly I knew which one I had to use...

Crystal is similar to Ruby but faster...and nicer -;) and it has Kemal...a Sinatra-like web framework...

When I discovered Crystal I was really impressed by how well it is designed...especially considering it's still in Alpha! How can such a young language be so good? Beats me...but Crystal is really impressive...

Anyway...let's see how the Blagchain works...

For now...it's not a dapp...but that's fine because you only use it locally...it uses two web applications that run on different ports...one working as the server and the other working as the client...

You can add a new product...

You can see here that we have our Genesis Block, a new block for the posting of a product (And they are connected via the Previous Hash) and also you can see that any transaction will cost us 0.1 Blagcoin...

Now, we can use another browser to create a new user...

As this user didn't create the product...he/she can buy it...and add a new transaction to the chain...

Money (Blagcoin) goes from one account to the other. The chain grows and everything is recorded...

What if you don't have enough Blagcoin to buy something?

Now...if you like this kind of thing...this is how many lines of code it took me...

(Server part) --> 129 lines
(Client part) --> 125 lines
index.ecr (HTML, Bootstrap and JQuery) --> 219 lines

So not even 500 lines of code for the whole application...that's pretty cool, huh? -;)

And yes...I know you want to see a little bit of the source code, right? Well...why not -:)
post "/sellArticle" do |env|
  user = env.params.body["user"]
  article = env.params.body["article"]
  description = env.params.body["description"]
  price = env.params.body["price"]
  amount = (env.session.float("amount") - 0.1).round(2)
  env.session.float("amount", amount)
  HTTP::Client.post("http://localhost:3000/addTransaction", form: "user=" + user +
                    "&article=" + article + "&description=" + description + "&price=" + price)
  env.session.bool("flag", true)
  env.redirect "/Blagchain"
end


SAP Labs Network.

Wednesday, January 17, 2018

Wooden Puzzle - My first Amazon Sumerian Game

If you read my previous blog Amazon Sumerian - First impressions you will know that I wouldn't stop there -;)

I have been able to play a lot with Sumerian and most importantly learn a lot...the tutorials are pretty good so you should read them even if you don't have access to Sumerian yet...

One thing that I always wanted to do...was to animate my Cozmo model...the one I did in Blender...

I tried doing it in Blender (rigging it and doing the animation, but it was getting weird as it worked fine in Blender but not in Sumerian...now I know why...but at the time I got frustrated) so instead I thought of doing it in Sumerian using its own tools...

I gotta admit...at first it didn't work...but then I kept exploring and realized that the Timeline was my friend...and after many tests...I got it working -;)

Here is how it looks...

So just go to Cozmo and click on the robot to start the animation and then click on him again to restart the animation...

Simple but really cool -:)

After that...I started thinking about doing something else...something more interesting and this time involving some programming...which is actually JavaScript and not NodeJS like I thought initially -:(

Anyway...I tried to do that once in Unity and also in Flare3D, but didn't have enough luck...although to be fair...at that time I didn't know enough...so I put myself into working on it...

I designed a Wooden Puzzle board using Blender, then imported it into Sumerian and applied a Mesh Collider to it...that way...the ball can run around the board and fall down if it gets over a hole...

Here is how it looks...

To play...simply use the cursor keys to move the board and guide the ball from "Start" to "Finish". Pressing "r" restarts the game.

Here's the link to play it "Wooden Puzzle"...

Was it hard to build? Not really -:) Sumerian is awesome and pretty powerful...on top of that...the Sumerian team is really nice and they are always more than willing to help...

So far my Sumerian experience has been nothing but great...and I can see myself doing more and more projects...

Of course...I'm already working on a couple more -;) Especially one involving the Oculus Rift...but that will take more time as I need to do a lot of Blender work...

Have you tried Sumerian? Not yet? Why don't you go ahead and request access?


Development Culture.

Friday, December 22, 2017

Amazon Sumerian - First impressions

For those who know me and for those who don't...I work as a Developer Evangelist...my main job is to learn, explore and evangelize new technologies and programming languages...and of course...AR/VR has been on my plate for quite some time...

I have played with Unity3D and Unreal Engine...and of course I have developed for the Google Glass, Microsoft HoloLens and Oculus Rift...

When the good folks at Amazon announced Amazon Sumerian you can figure out that I was completely thrilled -:D

So yesterday, I finally got accepted into the Beta program, so of course I started to follow a couple of tutorials and get to know the tool -;)

Please be advised that I'm just starting...I haven't tried or used everything...I want to go step by step following the tutorials and trying to understand everything in the most positive way...

Have I mentioned that Sumerian runs on your browser? How crazy is that? No installation...just launch up your browser and start building AR/VR experiences...

When you first launch it, you will be presented with the following screen...

Where you can create a new scene or simply use a template.

Sumerian provides many tutorials, and so far I have only made my way through the first 3...

So here's how my TV room looks...

As you can see...Sumerian is a full blown editor that provides all the tools that you can find on any other editor...plus many things that I believe are brand new and exciting...

Of course, you can preview your work...

As for the TV Room tutorial...the idea is that below the TV Screen, there's an Amazon Echo, so you can press it to change the videos presented on the screen. For this you need to use a State Machine and also create a script that will manage the different videos. For the scripting you need to use NodeJS...which is really nice as it is the language that I mainly use when developing applications for Alexa...

This is how my TV Room looks when playing a video in render mode -:)

Before moving on to learn more about Sumerian...I need to say that the navigation system doesn't seem to be too good by default...you can use the mouse buttons, Tab and Shift...but the control keys or WASD don't seem to work like you would expect from Unity3D or Unreal Engine...I have forwarded my question to the Sumerian Team...and I will update this post as soon as I get an answer :)

*UPDATE* By following the "Lights and Camera" tutorial I found out that while the default camera doesn't allow fine-grained navigation...the FlyCam does! -:D All good in the hood -;)

Till next time,

Development Culture.

Tuesday, May 2, 2017

Blender Lego Art for HoloLens

This blog was originally posted on Blender Lego Art for HoloLens.

Who doesn’t love Lego? And if you have used Blender before…who doesn’t love Blender? -:)

Combining both seemed like a great idea, so that's what I did…Using Blender I created some pretty simple Lego pieces that can be used to build both simple and complicated models.

A single piece is 0.25 by 0.25 and it’s made out of a single vertex. In the image, the colors are just used to give an idea of the different pieces.

The main point is simply to create a new Blender file, append the different pieces and start building.

At first it's kind of complicated because you need to deal with the X, Y and Z positions…but once you get used to it…it becomes a little bit addictive -:)
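Since every piece is 0.25 by 0.25…keeping things aligned is really just snapping coordinates to that grid…here's the idea as a quick Python sketch (a hypothetical helper…Blender's own snapping can do this for you)…

```python
GRID = 0.25  # the size of a single Lego piece

def snap(value, grid=GRID):
    # Round a coordinate to the nearest multiple of the piece size,
    # so pieces always sit flush against each other.
    return round(value / grid) * grid

# A roughly-placed piece lands exactly on the grid:
position = (snap(0.3), snap(1.1), snap(-0.62))
print(position)  # → (0.25, 1.0, -0.5)
```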

Now, the name of the blog is Lego Art, right? So…if you look online for Pixel Art instead, you will find a lot of nice images…like this one…

The perfect candidate for Lego construction! By putting the pieces together and simply assigning them the right material…we can get this…

And with some more time and dedication…we can get this…

Anyway…with a collection of models…we can think up and bundle them together into a Microsoft HoloLens application -;)

The application itself is easy…you start by looking at all the models on a shelf…you can select one, make it smaller or bigger, turn it left or right or simply go back to the shelf.

Here's a video showing how it looks.


Development Culture.

Wednesday, March 29, 2017

Room with a View – HoloLens, Unity3D and Blender

This post was originally posted on Room with a View – HoloLens, Unity3D and Blender.

For the last month or so I have been enlightening myself with an awesome Blender course…and while I had some previous experience from reading some books…nothing could get me to the point that I'm at now like this course…so…enough said…it's more than highly recommended 🙂

So…one of the challenges was to create something just using primitives…that means…cubes, spheres, cones and so on…nothing fancy…no modifiers…no extra knowledge…so I was able to come up with this…

That was a nice start…but I knew it needed something else…beside getting textures for it of course 🙂 It looked pretty plain…so next step was making it nicer…and this came along the way…

Now…it looks fancy, doesn't it? Everything handmade in Blender -;)

So…next step was to figure out what to do with this…it looked too nice to just leave it there on my hard drive…so I started thinking and then remembered that the Holographic Academy has a really nice demo called Origami…in this demo you need to select an origami ball that will drop down a paper plane and then hit a fan that will explode and in turn open a crack on the floor where a nice underground world can be seen…truly amazing if you ask me…so…I had an idea 🙂 Why not have a d-shop frame that will explode when selected and then open a hole in the wall where the room that I designed in Blender could be seen…that sounded like an impressive demo to me…so that's exactly what I did 😉

Here’s the video for your viewing pleasure…hope you like it 🙂

At first I thought it was going to be a lot of work…but it wasn't really like that…here are some highlights…

* I enclosed the room in a black box with a hole so it could be looked through…

* In Unity3D I simply used an Unlit shader…as black is processed as invisible on HoloLens, having the model in a black unlit box gives the same impression as being invisible…hence…it looks like a hole in the wall…

* I used some spatial mapping to be able to pin the frame and the room to the wall…so it didn’t float around but instead looked like the room was inside the wall…

* When I first imported the model from Blender into Unity3D…none of my textures were available…which seemed odd to me…as they were there and were actually assigned…it turned out that I needed to choose every single piece of the model and do a smart unwrapping…so the shader knows exactly how to apply the texture…

* At first I tested with the emulator…which is good…but doesn't really implement the same measure units as the real device…so while it looked fine on the emulator…it looked too far away on the HoloLens…so a lot of testing was needed in order to get it into the right position…

* To shoot the video, at first I tried using Camtasia…but of course…the rendering wasn't what I was expecting…one thing is to look at it from the HoloLens and another thing is looking from the laptop…so instead I used a recording from the Live Streaming of the HoloLens itself…and that did the trick…

As you can imagine…this now brings a whole world of new possibilities and demos…

Now it's your turn to show us…what you can do 😉


Development Culture.

Wednesday, February 15, 2017

SAP d-shop’s Virtual House – A journey from Physical to Virtual

This post was originally posted on SAP d-shop’s Virtual House – A journey from Physical to Virtual.

Some time ago…our good friends from SAP d-shop Newtown Square (namely John Astill et al.) built an IoT House for SAP Insurance. This little house (hand made by the way) used an Arduino Nano, a bunch of sensors and LED lights…and…which is pretty cool by the way…a 3D Printed washing machine with a water sensor…and of course…it was and it is…IoT enabled.

We thought it was pretty cool…so we have one at our own SAP d-shop at Silicon Valley and it has become a key part in all our d-shop tours.

Then…some time later, our friends from HCP Marketing (namely Joe Binkley et al.) and Intel built a Smart Building. A really nice building…controlled by Amazon Alexa that used an Intel Galileo, some Arduinos as well as servos, lights, a solar panel and even a fan…everything again…IoT enabled…but also as you may have guessed…voice controlled…so you can send the elevator up and down…open or close the doors and even send the whole building into emergency mode…gladly…we have kept it in the d-shop for quite some time and it's another of our "wow" factor demos every time someone comes to visit…

Having these two available for us…slowly sparked the fire of innovation and creativity…why don't we build a Virtual House that can be used on the Oculus Rift and is controlled by Alexa?

Not an easy thing…but we for sure love challenges…and thanks to our previous experience with Unity3D and Alexa working together we already knew how to start…

The architecture is pretty simple… The Heroku server is just an echo server, so it will repeat everything we pass to it as a JSON response. Our Unity app is constantly checking the Heroku server to see if there's a message to respond to. Of course, for this to work as intended, we need to set up a skill on Amazon Alexa just to update the server. So, when we say "open door", then Alexa will send a command to the Heroku server and this server will then produce an "open door" message in JSON. Our Unity app will read the Heroku server, and act accordingly by opening the door…of course, we don't want this to happen over and over…so after Unity executes the action it sends a null message to the Heroku server, so the next time the JSON response is going to be null as well and Unity will simply wait for the next valid command.
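Just to make that flow crystal clear…here's a minimal Python sketch of the echo protocol (the class and method names are made up for illustration…the real thing is an actual Heroku app plus an Alexa skill)…

```python
import json

class EchoServer:
    # Stands in for the Heroku echo server...it just stores the last
    # message and repeats it back as JSON every time it's polled.
    def __init__(self):
        self.message = None

    def update(self, message):   # what the Alexa skill does
        self.message = message

    def poll(self):              # what the Unity app does on every check
        return json.dumps({"command": self.message})

server = EchoServer()
server.update("open door")                      # "Alexa, open the door"

command = json.loads(server.poll())["command"]  # Unity reads the server...
if command:
    # ...acts on the command (e.g. plays the door animation)...
    server.update(None)                         # ...and resets the message

# ...so the next poll returns null and nothing fires twice.
assert json.loads(server.poll())["command"] is None
```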

If you want to take a sneak peek at how the Virtual House looks…here are a couple of screenshots…but don't forget to watch the video 🙂 You will get the full experience -;)

Now…this project started as a "Project in a Box" (internal only…sorry about that) which means…all the source code and explanations on how to build it from scratch should be provided…but…for obvious reasons…that didn't happen 🙁 So instead…we turned this into a "Product in a Box" meaning that (sorry again…internals only) you can download the compiled application and simply edit the configuration file to have it running on your own 🙂 No source code is provided but obviously a nice email can get you that -;)

Grab it from here

Now…that I got your full attention…please watch the video 🙂 It's a nice journey from the IoT House to the Virtual House passing by the Smart Building…

Now…you may wonder about the 3D Models used for this Virtual House…as you can see in one of the images, most of them were downloaded but some of them were developed in house 🙂 using Blender…like the Amazon Echo, the 3D Printed Robot and name tags, and obviously the house itself 🙂 For some other things like the plants and tables…those were imported into Blender and "hand painted" as the textures were not available.

Now…something that we believe is pretty important…is to list all the Pain Points and lessons learned while developing this application…

Pain Points and lessons learned:

  • As this is a Product in a Box and not a Project in a Box, we’re not going to include the source code for this application, but what we’re going to do instead is let you know the pain points and lessons learned that came from this project.
  • Unity uses the .NET Framework 3.5, which is already deprecated by .NET 4.0 so many things are not going to work simply because they haven’t been implemented…and why is that? Well…Unity uses Mono (which is .NET for Linux) and I guess they do it to maintain uniformity in all platforms. While Mono remains on .NET 3.5, Unity will not likely upgrade either.
  • When loading scenes, the lighting gets all messed up…so you start in level one…move to level two and suddenly it looks like nighttime…the solution is simple…choose "Window → Lighting → Lightmaps", uncheck the "Auto" checkbox and press "Build" to bake the light again.
  • Coroutines are simply awesome. Normally, you can’t make your application wait or sleep…but by using Coroutines you certainly can…Coroutines are like threads.
  • When using a light, make sure it's turned off while the character is not in the room, because this will save some graphics processing and because even virtually…we need to be environmentally aware…
  • Unity doesn't have a wrap function or property for 3D Text…which is kind of problematic especially if you want to do a Twitter Wall…so your only chance is to build your own…although that's not that hard…simply grab the incoming text, split it by space into an array…and concatenate each word, checking first if the length of the string is lower than our threshold (which should be the maximum number of characters that fit where our 3D text is); if the string is bigger than the threshold, we simply add a carriage return ("\n") before doing the concatenation.
  • As your application grows you might feel the need to duplicate some assets, which is perfectly fine and doesn't add too much processing (especially if you create a Prefab and use that prefab), but don't forget to assign them unique names, otherwise you're going to have a headache if your application needs to interact with those assets.
  • Sometimes you will download some 3D models from the web…other times you will create them using Blender…but don’t forget that sometimes just a simple sphere, cube or any other Unity primitive can work just fine by just using an image attached to it as its texture.
  • When creating your Alexa skill…make sure not to make any spelling mistakes…otherwise you will hit your head wondering why Alexa isn't doing what you're asking her to do…
  • When testing out your application both Debug.Log() and print() will become your best friends…nothing better than a printed value or message to realize what's going wrong.
  • When moving an object, always make sure to record its original position and then add the new value to that recorded position. Otherwise, something might provoke the values to go wrong…by having the original values recorded, you avoid having to recalculate the position but just call that variable and get things where they belong.
  • When using 3D Text you will notice that even if you put another object in front of it…it will always be visible…which is not very realistic…so we have two options…either create a shader to occlude it…or the easiest one…make the material that's in front of it transparent. That's not perfect for all situations but at least it works.
  • The biggest problem when making Unity and Alexa speak, is that when you ask Alexa to turn on the lights, she will respond "The lights are on"…but then if you ask a second time her response should be "The lights are already on"…to make this work…we would need to use a Database or something to store state information…and when closing the application, we would need to clean up the states…while this might be doable…it's a lot of work, and what happens if the application crashes? Would we need to go and reset the states manually? Not ideal…
  • That leads me to the point of using the elevator…you can open the doors or send it to any of the floors…for the most part…that's easy…each floor is a scene, so you need to be on the first floor in order to send the elevator to floor two or three…but…what if you're outside the elevator? You are on floor one…ask for floor three…and then you open the door…as your character moves along with the elevator floor…when you open the door everything will look bad…solution? Simply use a cube without a mesh renderer, so it's invisible…assign a collider with "is trigger" enabled…and validate that the player is colliding with the cube in order to make the elevator move…that way, even if you ask for floor three and Alexa confirms that the elevator is going up…nothing will happen…when you open the door…we can assume that the elevator went down or up to your floor in order for you to hop in…just an illusion…but it works…
  • Alexa doesn't have an option to delay the re-prompt, so when exploring the Virtual House she will ask you "What else can I do for you?" and if we don't respond the skill will just die…so we will need to wake her up again…that's kind of sad given the nature of the application…but nothing to be done unless Amazon releases a way of making the re-prompt wait longer…
  • As the whole Alexa-Unity3D flow relies on Heroku…expect some downtimes or responses from Alexa that are not actually replicated in the virtual world…it might be an internet connection glitch or just a Heroku glitch…
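The 3D Text wrapping trick from the list above is simple enough to sketch…here it is in Python rather than Unity C#, just to show the logic (the function name is mine)…

```python
def wrap_3d_text(text, threshold):
    # Split by space, then rebuild line by line: if adding the next word
    # would push the current line past the threshold (the max characters
    # that fit where the 3D text sits), start a new line instead.
    lines, current = [], ""
    for word in text.split(" "):
        candidate = current + " " + word if current else word
        if len(candidate) <= threshold:
            current = candidate
        else:
            lines.append(current)
            current = word
    lines.append(current)
    return "\n".join(lines)

print(wrap_3d_text("this tweet is a bit too long for the wall", 12))
```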

As I mentioned first…the environment gets affected by the weather…if it's sunny…you will see a sunny clear sky…if it's rainy you will see a dark and gloomy sky…and this involves using a Skybox…although not your regular Skybox…and what is a Skybox, anyway? Well…simply put…it's a cube that covers your whole environment and has different images to simulate the environment…the problem is that the regular Skybox only allows you to assign six sides…which of course is not enough…you need to use a twelve-sided Skybox…then you can assign sunny images and also cloudy images…that way when checking the weather you can modify the luminosity, and that will also affect the Skybox as it will use one set or the other, giving that nice effect of reflecting the outside weather…

Development Culture.

Tuesday, February 14, 2017

LED is my new Hello World - Prolog time

As promised...here's my LED Numbers app written in Prolog...it took me a long time...a lot of research...a lot of reading...so I hope you like it...I'm still a complete Prolog newbie...so no warranties at all -;)

number(0,[[' _  '],['| | '],['|_| ']]).
number(1,[['  '],['| '],['| ']]).
number(2,[[' _  '],[' _| '],['|_  ']]).
number(3,[['_  '],['_| '],['_| ']]).
number(4,[['    '],['|_| '],['  | ']]).
number(5,[[' _  '],['|_  '],[' _| ']]).
number(6,[[' _  '],['|_  '],['|_| ']]).
number(7,[['_   '],[' |  '],[' |  ']]).
number(8,[[' _  '],['|_| '],['|_| ']]).
number(9,[[' _  '],['|_| '],[' _| ']]).

digits(0,[]).
digits(X,[H|T]) :- (X/10 > 0 -> H1 is floor(X/10), H is X mod 10, digits(H1,T)), !.

accRev([],A,A).
accRev([H|T],A,R) :- accRev(T,[H|A],R).

getDigits(L,R) :- digits(L,Y), accRev(Y, [], R).

show_records([]).
show_records([A|B]) :-
  print_records(A), nl,
  show_records(B).

print_records([]).
print_records([A|B]) :-
  write(A), print_records(B).
merge([L], L).
merge([H1,H2|T], R) :- maplist(append, H1, H2, H),
    merge([H|T], R), !.

listnum([],[]).
listnum([H1|T1],[R|Y]) :- number(H1,R), listnum(T1,Y).

led(X) :- getDigits(X,Y), listnum(Y,Z), merge(Z,R), show_records(R).

Wanna see it in action? Me too -;)

Back to learning -;)


Development Culture.

My first post on Prolog

As always...I was looking for my next programming language to learn...and somehow...Prolog got in the way...

I had played with Logic Programming in the past by learning Mercury...but really...when it comes to logic...Prolog wins the pot...

Did you guys know that the first Erlang compiler was built on Prolog? Me neither -:P

For learning...I'm using SWI-Prolog which seems to be the nicest and most widely used...and I have to say...it's pretty cool -;) At a glance...Prolog reminds me of Mercury of course...but also Forth a little bit...and weirdly enough...Haskell, in the sense that recursion is a key component...

As happens many times when I'm learning a new programming language...I started off with my Fibonacci numbers app...so here it is...
fibo(NUM,_,_,[]) :- NUM =< 1.
fibo(NUM,A,B,[H|T]) :- (NUM > 1 -> H is A + B, X is NUM - 1,
                        (A =:= 0 -> fibo(X,H,B,T); fibo(X,H,A,T))).

fibonacci(NUM,R) :- fibo(NUM,0,1,X), !, append([0,1], X, R).

.pl extension? Yep...the same as Perl...but as you can see...it has nothing to do with Perl at all -;) Anyway...here's the output screen...

My LED Numbers application is gladly ready and will come right after this blog -;)


Development Culture.

Monday, December 5, 2016

LED is my new Hello World - Rust time

As I'm currently learning Rust, I need to publish my LED app again -;)

Please keep in mind that..."I'm learning Rust"...so my code might be buggy, long and not idiomatic...but...enough to showcase the language and allow me to learn more -;)

Here's the code...
use std::io;
use std::collections::HashMap;

fn main(){
 let mut leds:HashMap<&str, &str> = HashMap::new();

 leds.insert("0", " _  ,| | ,|_| ");
 leds.insert("1", "  ,| ,| ");
 leds.insert("2", " _  , _| ,|_  ");
 leds.insert("3", "_  ,_| ,_| ");
 leds.insert("4", "    ,|_| ,  | "); 
 leds.insert("5", " _  ,|_  , _| ");
 leds.insert("6", " _  ,|_  ,|_| ");
 leds.insert("7", "_   , |  , |  ");
 leds.insert("8", " _  ,|_| ,|_| ");
 leds.insert("9", " _  ,|_| , _| ");

 println!("Enter a number : ");
 let mut input_text = String::new();
 io::stdin().read_line(&mut input_text)
            .expect("failed to read");

 let split = input_text.trim().split("");
 let vec: Vec<&str> = split.collect();
 let count = vec.len() - 2;
 for i in 0..3{
  for j in 1..count + 1{
   match leds.get(&vec[j]){
    Some(led_line) => {
     let line = led_line.split(",");
     let vec_line: Vec<&str> = line.collect();
     print!("{}", vec_line[i]);
    },
    None => print!(" ")
   }
  }
  println!("");
 }
}

And here's the result...

Hope you like it and if you can point me to a more Rusty way of doing it...please let me know -:D


Development Culture.