White Horse Website built with Angular 2 and Node.js


For my very first freelance job, I was commissioned to develop a basic eCommerce website for a local business. The website needed to display basic business information (like store location and hours), retrieve inventory information from the store’s POS system, and display the products to users. It also had to be fully responsive for mobile users and tested on all major web browsers.

We used Node.js/Express.js as a proxy between the website and the Bindo POS API that the business used. The proxy also handled automated email delivery to the store for customer inquiries using Mailgun.

Our front-end web app was built with Angular 2 and Angular Router to create a seamless website with minimal loading times and few calls to the web server. Bootstrap 3 and Angular Bootstrap were incorporated as the CSS framework to provide consistent styling and structure throughout the user experience.

The website is currently hosted on Heroku and can be visited here:



CloudBase Website (Full stack built with Sails.js, MongoDB, and Angular)


My next foray into web development was full-stack development of a self-contained web application. CloudBase is a vape juice recipe repository: a user-friendly web app where users can create and browse community-made recipes. The project is logically split into a front end and a back end.


Sails.js was the chosen technology for all parts of the API backend. I used Sails.js to set up the API routes and handlers, and its ORM to interface with my MongoDB database, where all recipes, flavors, and user data are stored. Sails.js also performs authentication so users can access their profiles on the web app.
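To give a feel for how Sails models the data, here is a sketch of a model definition in Sails' style. The attribute names are illustrative, not CloudBase's actual schema; in a Sails app this object would live in `api/models/Recipe.js`, and Waterline (Sails' ORM) would map it onto a MongoDB collection:

```javascript
// Sketch of a Sails.js/Waterline model definition. Attribute names are
// illustrative; the real CloudBase schema may differ.
const Recipe = {
  attributes: {
    name:      { type: 'string', required: true },
    flavors:   { collection: 'flavor', via: 'recipes' }, // many-to-many with a Flavor model
    author:    { model: 'user' },                        // belongs-to a User model
    steepDays: { type: 'number', defaultsTo: 0 },
  },
};

module.exports = Recipe;
```

With this in place, handlers can call Waterline methods such as `Recipe.find()` or `Recipe.create()` without writing MongoDB queries by hand.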


I used Angular as my front-end MVC web app framework. Using Angular Router, the app serves users graphically friendly views of all the recipes stored in the CloudBase database and lets them create their own. Angular Material was chosen as the CSS framework because of the rich web components it delivers with seamless Angular integration.


The entire project is hosted in the cloud on Amazon Web Services, with all user-uploaded images stored in Amazon S3 buckets.

All related source code can be found in my GitHub repo:


We Love Leeks Website built with HTML/CSS/jQuery


As part of learning web development, I built a basic one-page website devoted to cooking with leeks. The site is built with jQuery and the Materialize CSS framework. In addition to basic leek knowledge, it calls Edamam’s API to supply all kinds of leek-related recipes.
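The AJAX pattern looks roughly like this. The endpoint and query parameter names are based on Edamam's historical search API and may differ from the current version, so treat them as placeholders:

```javascript
// Sketch of the jQuery AJAX pattern used to fetch recipes. The endpoint
// and parameter names follow Edamam's historical search API and are
// placeholders; consult Edamam's current docs for the real ones.
function buildRecipeUrl(query, appId, appKey) {
  const url = new URL('https://api.edamam.com/search');
  url.searchParams.set('q', query);
  url.searchParams.set('app_id', appId);
  url.searchParams.set('app_key', appKey);
  return url.toString();
}

// In the browser, jQuery fetches the JSON and renders the results.
// Guarded so the sketch is harmless outside a browser.
if (typeof $ !== 'undefined') {
  $.getJSON(buildRecipeUrl('leek', 'MY_APP_ID', 'MY_APP_KEY'), function (data) {
    $('#recipes').empty();
    (data.hits || []).forEach(function (hit) {
      $('#recipes').append($('<li>').text(hit.recipe.label));
    });
  });
}
```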

Although the concept is silly, this basic website let me learn the basics of AJAX calls with JavaScript, CSS formatting, and good design principles. The site is currently hosted with Apigee and can be visited at the following URL:


Source code for this project can be found on GitHub:


OpenPointMesh: Combining ordered datasets of spatial data from depth sensors


As part of my year-long senior design project, I worked on a team of five engineers to take a standalone product through a full development cycle. Our client at NASA’s Jet Propulsion Laboratory creates 3D scans of volcanic fissures and vents using a Structure infrared depth sensor, then manually stitches the resulting point clouds into a single cohesive model and applies a surface to it. This process takes weeks to produce a single model of a fissure, and our job was to automate it.

Our client asked that this tedious task be automated and made easily accessible to her and other scientists through a user-friendly GUI. We used the Point Cloud Library (PCL), OpenNI, and the Qt GUI libraries to develop our product. We also got to explore Google Test (GTest) for unit testing our application. For more info, you can watch our product presentation video.
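To give a flavor of what "stitching" involves, here is a toy sketch of one small step in point-cloud registration: translating one scan so its centroid matches a reference scan's. The real pipeline (PCL's ICP and related algorithms) also solves for rotation and rejects outliers; this shows only the translation part:

```javascript
// Toy illustration of one step in point-cloud registration: shifting a
// scan so its centroid coincides with a reference scan's centroid.
// Real stitching (e.g. PCL's ICP) also estimates rotation and handles
// noise/outliers; this is just the translation component.
function centroid(points) {
  const n = points.length;
  const sum = points.reduce(
    (acc, p) => [acc[0] + p[0], acc[1] + p[1], acc[2] + p[2]],
    [0, 0, 0]
  );
  return sum.map((c) => c / n);
}

// Translate `scan` so its centroid matches that of `reference`.
function alignCentroids(scan, reference) {
  const cs = centroid(scan);
  const cr = centroid(reference);
  const t = [cr[0] - cs[0], cr[1] - cs[1], cr[2] - cs[2]];
  return scan.map((p) => [p[0] + t[0], p[1] + t[1], p[2] + t[2]]);
}
```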


Our source code can also be found here: https://github.com/ProjectIRoniC/OpenPointMesh

Exploring Embedded Systems through building an Autonomous Drone


To learn the basics of embedded systems, I am currently working on a team of five undergraduates programming a Parrot AR.Drone 2.0. The drone itself is not inherently programmable and has very limited resources, so my team is interfacing it with a controlling computer, allowing us to retrieve data in real time, process it, and instruct the drone accordingly. The ultimate goal is to present visual commands to the drone’s onboard camera to control its actions. We must develop several hardware/software components to make this project successful, including:

  1. A mounted Arduino microcontroller that gathers ultrasonic distance data about the drone’s surrounding environment. This data is then transferred to a secondary computer via Bluetooth.
  2. A software component that must filter and interpret the ultrasonic data to give meaningful proprioception information to the drone.
  3. A Node.js program that interfaces with the drone over Wi-Fi, using TCP/IP networking and the HTTP protocol to send messages. This loop continually issues commands to the drone and retrieves the camera feed.
  4. A computer vision component that uses OpenCV to match predefined command templates to the real-time camera feed.
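For the command channel in step 3, a sketch of how control messages are built may help. The AR.Drone 2.0's low-level control channel accepts plain-text "AT*" commands carrying an increasing sequence number; the bitmask constants below follow the published AR.Drone SDK values (the send step is only commented, since the exact networking in our loop differs):

```javascript
// Sketch of building AR.Drone 2.0 control commands. The drone accepts
// plain-text "AT*" commands, each carrying an increasing sequence
// number. Bitmask values follow the AR.Drone SDK documentation.
const REF_BASE = 0x11540000; // always-set bits in the AT*REF control mask
const TAKEOFF_BIT = 1 << 9;  // bit 9 set = takeoff, cleared = land

let seq = 0;

// Build the next AT*REF command string (takeoff or land).
function refCommand(takeoff) {
  seq += 1;
  const mask = takeoff ? (REF_BASE | TAKEOFF_BIT) : REF_BASE;
  return `AT*REF=${seq},${mask}\r`;
}

// In a real control loop these strings are sent to the drone over the
// network, e.g. with Node's dgram module:
// const dgram = require('dgram');
// const sock = dgram.createSocket('udp4');
// sock.send(refCommand(true), 5556, '192.168.1.1'); // takeoff
```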

Overall, this is a very exciting project because of the number of technologies that must be developed and interfaced in parallel, such as Bluetooth communication, Arduino embedded systems, Node.js applications, and the OpenCV libraries. Our team’s website can be found here:


Concurrent OS Command Executor for Linux using Java Virtual Machine


For my Java Applications class, we were required to incrementally design a Java software system that could run operating system commands in parallel. I chose Ubuntu Linux as my testing OS, though the program should be portable to any system with Java installed. I designed the program from modular components with no dependencies among each other. This allows quick concurrent execution of commands without memory collisions, so long as the user does not define conflicting OS commands.

The application also provides an editable text file where a user can add, modify, or delete as many parallel “jobs” as they want to run. Each “job” is essentially a shell script whose commands run sequentially. The standard output and error streams can also be retrieved by the user for use in their own program.

To test the program, I made five copies of the OS’s English dictionary file and used grep with regular expressions to search for matches against five different patterns. Each copy-and-search of a single dictionary was performed as a separate job, run in parallel.

Here is a GitHub repository with source code for the components as well as the aforementioned testing program:


Basic 3D Object Renderer using OpenGL Graphics Libraries and Qt


As the final project in my computer graphics class, I was required to develop a graphical user interface with the Qt libraries that acted as a wrapper around an OpenGL-powered 3D model renderer. The program uses standard user interface techniques such as pop-out menus, radio buttons, spin boxes, mouse tracking, and active message passing to enact updates. Qt is a fantastic library with its own standalone IDE and editor for creating programs with supporting GUIs.

OpenGL is one of the most powerful graphics libraries freely available. The API interfaces with graphics hardware to render 3D vector graphics. OpenGL involves lots of matrix transformations and manipulations, which definitely challenged my knowledge of linear algebra. Here is a description and summary of the elements in my program:
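As a tiny illustration of the linear algebra involved, here is a sketch of applying a 4x4 transform to a homogeneous vertex, the operation at the heart of OpenGL's model/view/projection pipeline (column-major storage and the perspective divide are omitted for brevity):

```javascript
// Tiny illustration of the matrix math OpenGL leans on: applying a 4x4
// transformation matrix to a homogeneous vertex [x, y, z, w].
// (Column-major storage and the perspective divide are omitted.)
function transformVertex(m, v) {
  // m is a 4x4 matrix given as an array of rows.
  return m.map((row) => row.reduce((sum, a, i) => sum + a * v[i], 0));
}

// A translation by (tx, ty, tz) expressed as a 4x4 matrix.
function translation(tx, ty, tz) {
  return [
    [1, 0, 0, tx],
    [0, 1, 0, ty],
    [0, 0, 1, tz],
    [0, 0, 0, 1],
  ];
}
```

Rotations, scaling, and projection are all matrices of the same shape, which is why composing them is just matrix multiplication.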

Description of the 3D Model Viewer and Renderer
This program provides an intuitive graphical user interface to access a few of OpenGL’s many rendering options and techniques, without having to code. With the ability to load in “.3ds” files, users can test a variety of material and lighting options and view the output without physically altering the original data. Users have access to up to three lights and their properties, material properties, environmental mapping, and a variety of shaders. The integrated mouse controls also allow users to rotate, pan, and zoom the perspective to view all aspects of the rendered model.


Basic User’s Guide
1. Opening a File and Predefined Models
Users can load any basic “.3ds” file into the program using the File->Open option. This command imports the vertices and faces of the model, renders them in the scene, and scales the model to be appropriately viewed. The user can also choose from two more included models, Icosahedron and Teapot, found in the File menu.
2. Shaders Menu
Under the Shaders menu, the user can choose from several shaders that can be applied to the rendered model:
● Flat Shader
● Smooth Shader
● Phong Shader
● Per Pixel Shader
● Normal Mapping (currently not functional)
● Toon Shader
● Custom Shader 1 (Mouse controlled shading)
● Custom Shader 2 (random coloring)
3. Lights Menu
Under the Lights menu, the user can choose to edit one of three different lights in the scene. Selecting a light shows a properties window where the following light attributes can be modified:
● Light enabled
● Spot Light/Point Light
● Position
● Direction (Spot Light)
● Spot Light Cutoff Angle
● Coefficients for Light Attenuation
● RGB Ambient Values
● RGB Diffuse Values
● RGB Specular Values
4. Material Menu
Within the Material menu, the user can modify the material attributes applied to the rendered model. The properties window allows the following values to be altered:
● Set default material
● RGB Ambient Values
● RGB Diffuse Values
● RGB Specular Values
● RGB Emissive Values
● Shininess
5. Environment Map Menu
Within the Environment Map menu, users can enable or disable environment mapping on the rendered object. Users can also specify the path to the images to be mapped onto the object. Currently only “.tga” files are supported.
6. User Camera Control
● Rotate Camera (hold down left mouse button while moving mouse)
● Pan Camera (hold down right mouse button while moving mouse)
● Zoom in (move mouse wheel forward)
● Zoom out (move mouse wheel backward)


Here is a GitHub repository with all the source code:


Warning: one of my custom fragment shaders uses a random number generator to determine pixel color values, so users with photosensitive epilepsy should probably avoid that feature.