Hello.

I'm Ian Latchmansingh, Principal Design Technologist for AWS.



Prototypes

Contextual Augmented Reality Interfaces for the Internet of Things • Consumer Car Care Subscription Services • Virtual Remote Auditing for GxP Compliance • Interactive Journalism • IoT Provisioning and Monitoring for Casino Gaming • VUI Design for Clinical Trials and Medication Adherence • Industrial Cobot Orchestration for Automated Material Replenishment • Broadcast Monitoring using Machine Learning and Computer Vision • Distributed Consumables Procurement for Container Shipping • Connected Devices for C-Store Cleanliness Monitoring and Task Management • Virtual Doping Detection for Physical E-Sports • Predictive Fleet Vehicle Maintenance for SMBs • Agricultural Cost Center Business Intelligence for IoT-equipped Farms • Realtime Inventory Forecasting for Just-in-Time Manufacturing • Supply Chain Weakness Dashboards for Agricultural Logistics • Intelligent Damage Detection for Rental Vehicles • Procedural Terrain Generation for Robotics Simulation and Machine Learning • Municipal Infrastructure Prediction and Planning Tools • Hybrid UIs for Realtime Customer Service Analytics • Disaster Risk Monitoring Search Interfaces for Restoration Contractors • VUIs for Drive Thru Restaurants






Bio


My résumé is best seen on LinkedIn. Otherwise, you can reach me at ianlatchmansingh [at] gmail.com. I am a Principal Design Technologist for AWS Prototyping & Cloud Engineering in NYC. Previously, I was a UX Director for start-ups and digital creative advertising agencies. I design and build human-centric prototypes that employ Artificial Intelligence, Machine Learning, Robotics, the Internet of Things, and Augmented & Virtual Reality. Brands I’ve worked with include NASA/JPL, Wendy's, The NBA, Boeing, The Ad Council, Discovery Channel, Disney, Comcast, and John Deere, to name a few.




Broadcast Monitoring using Machine Learning and Computer Vision


Broadcast monitoring is a service offered to broadcasters and over-the-top (OTT) streamers that performs a large number of quality checks on a given media source. Issues can be relatively minor, like spelling or audio volume, or more critical, like content errors (broadcasting the wrong media) and incorrect audio (wrong language or content).

Traditionally, the higher-level quality checks are conducted manually by human operators who constantly watch broadcast streams, spotting issues and escalating them to the source. An operator may be watching anywhere from six to 34 simultaneous streams, an approach that cannot scale with the available workforce. As OTT streams in particular proliferate, it may become essential for quality monitoring services to augment their workforce with machine learning.

This solution, for which I designed and developed the interface, automates monitoring tasks that were previously manual chores. This frees human workers to focus on higher-level work, take action sooner, and handle a higher volume of broadcasts without sacrificing efficacy.

This prototype was developed in six weeks alongside engineers Adam Best and Angela Rouhan Wang, who used AWS AI services like Amazon Rekognition to analyze the content of an HTTP Live Streaming (HLS) video stream. This is done in near real time (under 15 seconds per sample).
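A minimal sketch of what one monitoring cycle could look like: sample a frame from the stream, run it through Amazon Rekognition's text detection, and compare the detected on-screen text against the expected program metadata. The function and field names here are illustrative assumptions, not the prototype's actual API.

```python
def detect_onscreen_text(frame_bytes: bytes) -> set[str]:
    """Send one sampled frame to Amazon Rekognition and collect detected words."""
    import boto3  # imported lazily; requires AWS credentials to actually run

    client = boto3.client("rekognition")
    response = client.detect_text(Image={"Bytes": frame_bytes})
    return {
        d["DetectedText"].lower()
        for d in response["TextDetections"]
        if d["Type"] == "WORD"
    }


def flag_issues(detected: set[str], expected: set[str]) -> list[str]:
    """Pure comparison step: report expected overlay text missing from the frame."""
    return sorted(expected - detected)
```

In practice, a loop like this would run once per sample window against each stream, which is where the sub-15-second budget per sample comes in.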










Procedural Terrain Generation for Robotics Simulation and Machine Learning

These terrain generators were initially developed for the AWS JPL (yes, that Jet Propulsion Laboratory) Open Source Rover Challenge, a virtual hackathon that challenged contestants to improve how rovers on Mars operate on unpredictable terrain.

This design is a fully procedural particle system that can:

  • distort the surface
  • scatter obstacles of varying complexity, size, and frequency
  • dynamically set the rover origin point to a flat, transitional surface
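The three steps above can be sketched as follows, assuming a simple heightmap-based terrain rather than the actual particle system: distort a surface with smoothed random noise, scatter obstacles of varying size, then pick the flattest cell as the rover origin.

```python
import numpy as np


def generate_terrain(size=64, roughness=1.0, n_obstacles=20, seed=0):
    rng = np.random.default_rng(seed)

    # 1. Distort the surface: coarse random noise upsampled into smooth-ish hills.
    coarse = rng.normal(0.0, roughness, (size // 8, size // 8))
    heights = np.kron(coarse, np.ones((8, 8)))  # nearest-neighbour upsample

    # 2. Scatter obstacles of varying size at random cells (row, col, radius).
    obstacles = [
        (int(rng.integers(size)), int(rng.integers(size)), float(rng.uniform(0.2, 2.0)))
        for _ in range(n_obstacles)
    ]

    # 3. Set the origin where the local slope (gradient magnitude) is smallest,
    #    i.e. a flat, safe place to spawn the rover.
    gy, gx = np.gradient(heights)
    slope = np.hypot(gx, gy)
    origin = np.unravel_index(np.argmin(slope), slope.shape)
    return heights, obstacles, origin
```

The real generator produced full 3D geometry; this sketch only captures the logic of the three bullet points.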

This was later repurposed to generate office environments as well.









Contextual Augmented Reality Interfaces for the Internet of Things

In 2019, 833 million smart home devices were shipped worldwide, an increase of nearly 27% over the previous year. These products range from conventional appliances like lights, refrigerators, and televisions to more niche items like breadmakers and connected cradles.

With the ever-increasing sprawl of the IoT landscape, it seemed only logical that we begin to visualize and manipulate these devices in a single, contextual, spatial interface. I worked with Ramin Firoozye to craft a UI prototype showing what his ARIoT concept would look like in practice; the results were revealed at AWS re:Invent 2019.



Components of the design system:

  • Device Branding
  • Device Iconography
  • Device Name
  • Controllable Parameter Information
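As a hypothetical sketch, the per-device payload that such an AR overlay could render might mirror the four components listed above. The field names here are illustrative, not the actual ARIoT schema.

```python
from dataclasses import dataclass, field


@dataclass
class ControllableParameter:
    name: str            # e.g. "brightness"
    value: float
    minimum: float = 0.0
    maximum: float = 1.0


@dataclass
class DeviceOverlay:
    brand: str                                  # Device Branding
    icon: str                                   # Device Iconography (asset id)
    name: str                                   # Device Name
    parameters: list = field(default_factory=list)  # Controllable Parameter Information


lamp = DeviceOverlay(
    brand="Acme",
    icon="icons/bulb.svg",
    name="Living Room Lamp",
    parameters=[ControllableParameter("brightness", 0.75)],
)
```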

The initial concept of a connected home was also extended to agriculture technology.






Experiments


The following entries are prototypes, experiments, and other notions at the intersection of design and technology.

NPM: ThreeJS Lighting Setups


Basic, parameterized lighting setups for browser-based 3D product visualizations.
View on NPM Registry