Robots that are "Intuitive" with Objects

CyRo

VERSION ALPHA

CLX1


RELEASE VERSION (LAUNCHING THIS JUNE)

CLX1 - Vision Stack

An intelligence stack that instinctively sees any object in any environment, with NO training. Just like a baby's brain, this unique HW & SW Vision Platform enables building machines that are intuitive about objects like never before - from robots assembling parts on factory lines, to autonomous driver assistance, to object search.

"Adapts to any amount of lighting variations"
"Identifies even mirror-like reflective parts"
A camera that "sees" Motion & "feels" Force
Explore the intricacies of our innovative technology and discover the endless possibilities.
CyRo at Denso - Hot Swapping Robot between Multiple Assembly Tasks ...
CyRo at Boston Robotics Summit & Expo
CyRo - Training-Free Grasp of Any Object
Zero Training to Pick
Three-Robot Configuration (assembly and warehousing application demo)
Timed Random Pick Trials
CyRo Meets People - Unveiling the Object Intelligence
High Speed Adaptive Machine Tending
Depth from Autofocus
Interested? Want to know more about CynLr?
GET IN TOUCH
OBJECT MANIPULATION

Object Processors that learn to See and Manipulate any Object

Grasp any object
without pre-training

Just as humans can grasp unknown objects they have never seen before, CynLr’s technology enables robots to grasp any object without any pre-training. We learn objects only after we pick them up.

Learn to manipulate
and re-orient

We identify an object by its shape and colour. But when the object's orientation changes, the same object presents different shapes and colours. Our technology allows robots to instinctively piece together an object's identity from all its complex shapes and orientations, and to know how to re-orient it.

Make oriented
placements

One can’t build a car by throwing parts at each other. We enable robots not only to grasp objects from unstructured scenarios, but also to learn to make oriented placements that achieve the desired outcome.

How can this be beneficial for industries?

Part Feeding

An object is characterized by its shape, and its shape as perceived by the observer varies for every small change in orientation. Our technology allows robots to learn to manipulate objects in any orientation.

Part Mating

An object is characterized by its shape, and its shape as perceived by the observer varies for every small change in orientation. Our technology allows robots to learn to manipulate objects in any orientation.

Explore the intricacies of our innovative technology and discover the endless possibilities.
Convergence Depth Mapping and ...
Event imaging on Mirror...
Tracking and Grasping...
Gripping of Delicate objects...
Robots

Robots deserve better than Sensor (Con)Fusion

Con Fused
Sensors

The Human eye doesn't use different sensors for Motion, Depth and Colour.

CynLr's Vision system sees Motion, Depth and Colour all at once, at the same resolution, in sync, through the same pair of eyes - making the cacophony of RADAR, LIDAR & Vision fusion redundant. One Vision Platform To Rule Them All.

Convergence
of eyes

Ever wondered why all animals "Converge" their eyes? Depth is perceived through Convergence.

Convergence gives depth 10x faster and at 3x the resolution of traditional stereoscopy. No more wasting compute power on calibrating images and extracting features to construct depth. A lidar and camera in one.
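For intuition only (a geometric sketch, not CynLr's actual pipeline - the function and parameter names below are hypothetical), the depth of a fixated point falls out of simple triangulation once two symmetrically converged cameras report their vergence angle:

```python
import math

def depth_from_vergence(baseline_m: float, vergence_rad: float) -> float:
    """Depth of the fixation point for two cameras separated by
    baseline_m (metres) and symmetrically rotated so their optical
    axes intersect at the fixated point, forming the given vergence
    angle (radians) between them.

    Geometry: half the baseline and the depth form a right triangle
    whose angle at the fixation point is half the vergence angle.
    """
    return (baseline_m / 2.0) / math.tan(vergence_rad / 2.0)
```

The larger the vergence angle, the nearer the fixated point; a wider baseline gives finer depth resolution for the same angular precision, which is one reason a vergence readout can stand in for an explicit disparity search.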

Create rich visual physics models of objects

"Sight" is not "Vision". Vision occurs when sight overlaps with all other senses, giving meaning to the colours that we see - a mango or a spoon.

Human Vision understands objects through 7 different dimensions of information - not just colour and depth. We create rich visual physics models of objects through a combination of Liquid Lens Optics, Optical Convergence, Temporal Imaging, Hierarchical Depth Mapping, Force-Correlated Visual Mapping, and many such technologies.

Interested? Want to know more about CynLr?
GET IN TOUCH