You're up and running in 3 clicks. On Raspberry Pi, Mac, Windows & Linux.
It just works. And it's open source - so you can dig in and change anything you want.
Tons of Power. Tons of Examples. You're up and running in 30 seconds:
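For example, here is a minimal sketch of a DepthAI "hello world" in Python (assuming `pip install depthai opencv-python` and an OAK-D-Lite plugged in over USB): it defines a pipeline that runs on the camera and streams the color preview to the host. Exact node options can vary slightly between depthai releases.

```python
import cv2
import depthai as dai

# Define a pipeline that runs on the OAK-D-Lite itself
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

# Connect to the device over USB and display frames on the host
with dai.Device(pipeline) as device:
    q = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        cv2.imshow("OAK-D-Lite", q.get().getCvFrame())
        if cv2.waitKey(1) == ord("q"):
            break
```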
3D object detection is what humans do. We know what objects are - and where they are in physical space. It's why we can pick up a coffee cup, or catch a ball.
Get the pose of objects in yaw, pitch, and roll - in addition to where they are in physical space.
3D depth information that is semantically labeled per pixel. This allows your robot to stay on the sidewalk, or know when there's some object in your path.
Track hands and/or full body pose in full 3D coordinates.
Motion capture - without any additional hardware or sensors - with a direct Unity tie-in.
The possibilities are endless. From keeping hands safe - to all sorts of interactive control.
Pair with a holographic display from the Looking Glass Factory... and make the future now.
Someone giving you elevator eyes? Now your robot can know this too. Or your smart dash cam. Or your assistive robot that will grab what you’re looking at.
Know the full speed, trajectory, vehicle type and color, license plate region/province, and plate number.
The spatial data of OAK-D-Lite enables privacy-first analytics of dwell time (in physical space), direction of travel, and people counting - including travel speed and interest estimation.
Edge detection is useful in so many ways. Find the edge of the lane, use it to better get 3D locations of objects (semantic labeling to the edge), or just have a fantastic real-time techno dance party. It's incredibly fun and fascinating to play with.
This allows your robot to orient itself, and know how it's moving, visually. Just like Lionel Messi does when sprinting down the field, now your device can too.
This alone is a reason to have an OAK-D-Lite (or 10) in your toolbox (or schoolbox) - you can now record 4K video on a Pi Zero! The raw 373 MB/s sensor stream is compressed to 3.125 MB/s - all on-camera. H.264, H.265, MJPEG, JPEG, and lossless JPEG at up to 13MP.
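As a rough illustration of how that on-camera encoding is used from the DepthAI Python API, the sketch below (a hedged example, not a definitive recipe) configures the 4K color camera, routes it through the on-device H.265 encoder, and has the host do nothing but append the compressed bitstream to a file - which is why even a Pi Zero can keep up.

```python
import depthai as dai

pipeline = dai.Pipeline()

# 4K color camera feeding the on-device H.265 encoder
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)

enc = pipeline.create(dai.node.VideoEncoder)
enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
cam.video.link(enc.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("h265")
enc.bitstream.link(xout.input)

# The host (even a Pi Zero) only appends the already-compressed bitstream to disk
with dai.Device(pipeline) as device, open("video.h265", "wb") as f:
    q = device.getOutputQueue("h265", maxSize=30, blocking=True)
    for _ in range(30 * 10):  # roughly 10 seconds at 30 FPS
        q.get().getData().tofile(f)
```

The raw `.h265` stream can then be wrapped into a playable container on the host, e.g. `ffmpeg -framerate 30 -i video.h265 -c copy video.mp4`.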
The built-in 13MP color camera can be losslessly zoomed 6x and panned - all guided by AI and with 1080p HD or 4K encoded output.
The onboard AI can be used to automatically blur faces (or any other details) prior to video encoding - so you can keep the video you need (e.g. is the product damaged?) without sacrificing people's privacy.
With the previous OAK-D KickStarter, we enabled thousands to harness the power of Spatial AI. But too many ended up wishing they could use it to solve their problem - because it is a cheat mode for so many problems - only to find it was simply too expensive.
If you passed sixth grade - you got this! OAK-D-Lite is for you. Students, Artists, Raspberry Pi Hackers, K12 Educators, RC Car Racers, Roboticists, Python Developers, AR/VR Enthusiasts, Unity Hackers, and Computer Vision and AI Experts alike.
Your imagination is the only limit. OAK-D-Lite gives you human-like perception as a building block.
From minute 0.
The Open Source Cortic AI Toolkit allows you to drag and drop to make powerful custom applications. Better yet, it also runs on Raspberry Pi - a perfect pairing for OAK-D-Lite.
The DepthAI pipeline builder allows you to do things like the example below, where a person is found, then the limbs are found, then the hand, and from there a hand-pose (and/or sign-language) model is run on the hand. The 13 MP sensor helps a TON here - leaving enough pixels after the whole pipeline has effectively zoomed 40x.
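As a simplified, hedged sketch of that kind of staged pipeline (kept host-side here for brevity - fully on-device chaining uses ImageManip/Script nodes), the example below runs a first-stage person detector on the camera and hands the resulting crops to whatever second-stage model you like; the `person-detection.blob` path is a placeholder for a model you compile or download yourself.

```python
import cv2
import depthai as dai

PERSON_BLOB = "person-detection.blob"  # placeholder path - bring your own compiled model

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Stage 1: MobileNet-SSD-style person detector running on the camera
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath(PERSON_BLOB)
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)

xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")
cam.preview.link(xout_rgb.input)

xout_nn = pipeline.create(dai.node.XLinkOut)
xout_nn.setStreamName("detections")
nn.out.link(xout_nn.input)

with dai.Device(pipeline) as device:
    q_rgb = device.getOutputQueue("rgb", maxSize=4, blocking=False)
    q_det = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        frame = q_rgb.get().getCvFrame()
        h, w = frame.shape[:2]
        for det in q_det.get().detections:
            # The cropped person region is what a second-stage model
            # (limbs, hand, hand-pose, ...) would consume next.
            x1, y1 = int(det.xmin * w), int(det.ymin * h)
            x2, y2 = int(det.xmax * w), int(det.ymax * h)
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.imshow("stage-1 detections", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```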
Seriously. The Mini Pupper is already a blast. But with OAK-D-Lite vision, it's now your Border-Collie-level smart robotic dog. Border Collies are amazing because they can learn very specific hand signals (and they're neurotic about them). Now Mini Pupper can be just as amazing (and neurotic)!
Knowing what things are, and where - in real-time on a low-cost device - has the promise of allowing all sorts of multi-player experiences. Imagine if laser tag were no longer tech from the 1980s. We think OAK-D-Lite will help here.
The built-in AI unlocks all sorts of fun applications. Your imagination is the only limit.
Finally... a Roomba for your back yard. And you can make one!
The global shutter cameras - which give depth at over 200FPS - allow the device to see at speeds that a human can. So just like a person can track a ping pong ball in physical space - OAK-D-Lite can too. But since it's tech-based, it can record the position of the ball at 200+FPS. Full trajectory and speed of your shots!
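For reference, here is a hedged sketch of how the global-shutter stereo pair is turned into a live depth stream with the DepthAI Python API - mono cameras at 400p, where the stereo block runs fastest; the exact frame rate you reach depends on configuration.

```python
import cv2
import depthai as dai

pipeline = dai.Pipeline()

# Global-shutter mono cameras at 400p for high-frame-rate stereo
left = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

right = pipeline.create(dai.node.MonoCamera)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
left.out.link(stereo.left)
right.out.link(stereo.right)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("depth")
stereo.depth.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("depth", maxSize=4, blocking=False)
    while True:
        depth_frame = q.get().getFrame()  # uint16, millimeters per pixel
        cv2.imshow("depth", cv2.convertScaleAbs(depth_frame, alpha=0.03))
        if cv2.waitKey(1) == ord("q"):
            break
```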
Feature tracking, depth, object detection in 3D space. OAK-D-Lite is like a cheat mode for robotic navigation.
The Looking Glass Factory has solved the Holographic part. But the capture part of holograms is still painful. We think that over time - and with community support - OAK-D-Lite will be a piece of enabling real-time Holographic communication for the first time.
Here’s our 3D Engineer Sachin on the Looking Glass Portrait.
Using the feature tracking above - and an external IMU - it is now possible to estimate your (robot's) position without GPS.
Depth and AI allow making powered wheelchairs automatically prevent commands that would cause harm (like an errant command that would bring the wheelchair down a flight of stairs).
Not kidding. It can do this. And Cortic Technology makes an Open Source education platform that makes building this easier than building the LEGO that holds the OAK-D-Lite.
Keep pre-term babies safe using human-like perception to know if everything is OK. This avoids potentially dangerous, invasive monitoring - and acts as a force multiplier for doctors, nurses, and parents.
We actually didn't believe this one at first. It's crazy such a thing can be detected passively.
The combination of depth and artificial intelligence allows you to monitor things that previously would feel like science fiction. Identify the sections of the chest with AI, and use depth to know the breathing profile over time.
Just like with people, this can be used to know when animals are stressed, injured, or otherwise have a health problem - by monitoring behavior and activity in physical space.
Built off of OAK, bluebox Co-pilot keeps your vehicle safe, keeps you safe, and can automatically log (other) aggressive drivers and dangerous situations.
This is what we all dreamed of as kids. It's now possible. So jealous.
Yep, in 3D Space too. We didn't even know this was a thing. But OAK-D-Lite can do it. Thanks to geax. Gotta love open source.
The GIF doesn't do it justice. This is a beautiful 4K video.
Automatically record 4k videos and/or beautiful 13MP JPEGs when certain species are detected.
Pick only the ripe ones. Sort by ripeness. Avoid people with Strawberry Allergies.
Lasers instead of harmful chemicals. Feels like science fiction. It's the future.
This one is above our heads. But it's super cool, and agriculture geeks have to love this, right?
We get this one. Algorithms are great at packing. Give them the information they need, in real-time. Let robotics pack for us!
I can't touch my toes. With OAK-D-Lite... maybe I'll be able to.
Only a tiny portion of the ocean has been explored. No, really. IPOZ is changing that with OAK technology.
When we saw this we were blown away! We couldn't control ourselves. Thanks, Boris!
Depth view gives an idea of what is happening - while respecting privacy. And you can send just metadata instead of any images at all.
AI for detecting defects is great. But often without depth, the error rate is just too high. Depth is the missing piece - a piece that humans have - which allows the accuracy to be sufficiently high to finally be useful.
The capability to mimic a human-like understanding of what things are and where enables tons of human-machine safety applications.
Since the counting is done on-device, there's actually no need to return anything more than the metadata itself (the crowd count) - enabling communication over extremely low-bandwidth links like LoRa.
This is what started it all. And it remains our North Star. Distracted driving causes fender benders when between cars. When between a car and a person, it kills.
Spatial AI can be used to change the course of history as it’s happening. To prevent accidents as they were going to occur.
OAK-D-Lite is built on the open-source DepthAI ecosystem, with out-of-the-box support for the frameworks below; and since the API (and hardware) are open source, it's easy to extend support to nearly any language & framework.
Out-of-the-box Python, C++, Java, ROS & ROS2, Roboflow, CEP Support
Works with any OS. You name it. Pre-built binaries are regularly released for Mac, Windows, Ubuntu, Raspberry Pi OS, and the Jetson series.
And since it’s heavily open source (including being able to download the design files), every day we wake up to some new integration with a new language, or suite of tools, etc.
There’s a famous adage that any useful technology will eventually send an email and be available in JavaScript… both of which are true here.
We worked hand-in-hand with ArduCam to design cameras from the ground up for OAK-D-Lite.
The mechanical design of OAK-D-Lite was highly iterative - taking in feedback from alpha testing and initial field deployments of the first prototypes to refine the final design.
SN-000226-ADC