#omnivision — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #omnivision, aggregated by home.social.
-
https://www.europesays.com/ch/49066/ Huaqin’s US$580 million Hong Kong share sale attracts JPMorgan and UBS #AIoT #Beijing #ChinaInsightsConsultancy #ConsumerElectronics #GigaDevice #HongKong #Huaqin #JPMorganChase #NVIDIA #OmniVision #Shanghai #SmartVehicles #SmartWearables #UBS #VictoryGiant
-
https://www.europesays.com/ch/49031/ Huaqin’s US$580 million Hong Kong share sale attracts JPMorgan and UBS #AIoT #Beijing #ChinaInsightsConsultancy #ConsumerElectronics #GigaDevice #HongKong #Huaqin #JPMorganChase #NVIDIA #OmniVision #Shanghai #SmartVehicles #SmartWearables #UBS #VictoryGiant
-
Video processing on microcontrollers, ASICs and FPGAs. This also covers protocol bridges.
-
I'm looking for a CMOS camera sensor for a FOSH project. I'm just looking for a surface-mount chip without the lens.
It looks like two big brands are onsemi and OmniVision. I've applied for NDA access to the datasheets, but I'm wondering if there are good options with open specs. How do people generally release FOSH based on chips with NDA-covered specs?
I've gotten the OV7670 prototype boards configured using I2C (a minimal configuration sketch follows below) and I can read frames out using the parallel interface.
Parallel interfaces are easy, but many cameras now have MIPI interfaces. Is it easy to read MIPI data? I'm planning to use an FPGA to read the camera data, so although parallel would be easiest, I imagine there could be a MIPI Verilog 2005 implementation that I could compile to run on my FPGA?
#foss #fosh #camera #sensor #camerasensor #fpga #mipi #onsemi #omnivision #cmossensor
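The post above mentions configuring the OV7670 over I2C before reading frames out of the parallel port. As a rough illustration, here is a minimal Python sketch of that configuration step, assuming a Linux host with an I2C adapter and the smbus2 package. The register addresses and values (7-bit SCCB address 0x21, COM7 at 0x12, the QVGA/RGB value, PID at 0x0A) are taken from commonly circulated OV7670 register maps and are assumptions to be checked against the datasheet, not a verified bring-up sequence.

```python
# Minimal sketch of OV7670 configuration over I2C (SCCB), assuming a Linux
# host with an I2C adapter and the smbus2 package. Register addresses and
# values come from commonly published OV7670 register maps; verify against
# the datasheet for your module before relying on them.
import time
from smbus2 import SMBus

OV7670_ADDR = 0x21      # 7-bit SCCB address (0x42 write / 0x43 read on the wire)

REG_COM7 = 0x12         # common control 7: reset + output format select
COM7_RESET = 0x80       # bit 7 resets all registers to defaults
COM7_FMT_QVGA = 0x14    # assumed example value: QVGA + RGB output

with SMBus(1) as bus:   # /dev/i2c-1; adjust for your board
    # Soft-reset the sensor, then give the registers time to settle.
    bus.write_byte_data(OV7670_ADDR, REG_COM7, COM7_RESET)
    time.sleep(0.01)

    # Select an output format. Frames are then clocked out on the parallel
    # port (PCLK/HREF/VSYNC/D[7:0]); this script only handles configuration.
    bus.write_byte_data(OV7670_ADDR, REG_COM7, COM7_FMT_QVGA)

    # SCCB reads may need a separate write/read pair on some adapters; a
    # combined transaction works on many common I2C controllers.
    chip_id = bus.read_byte_data(OV7670_ADDR, 0x0A)  # PID register, expect 0x76
    print(f"OV7670 PID: {chip_id:#04x}")
```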
-
OmniVision’s New Smartphone Sensor Promises the Industry’s ‘Highest Dynamic Range’ https://petapixel.com/2025/04/14/omnivisions-new-smartphone-sensor-promises-the-industrys-highest-dynamic-range/ #sensortechnology #dynamicrange #imagesensor #Technology #omnivision #smartphone #Mobile #mobile #phone #News
-
Edge-Ready #Vision Language Model Advances Visual #AI Processing 🌟
🧠 #OmniVision (968M params) sets new benchmark as world's smallest #VisionLanguageModel
🔄 Architecture combines #Qwen2 (0.5B) for text & #SigLIP (400M) for vision processing
💡 Key Innovations:
• 9x token reduction (729 → 81) for faster processing (see the sketch after this post)
• Enhanced accuracy through #DPO training
• Only 988MB RAM & 948MB storage required
• Outperforms #nanoLLAVA across multiple benchmarks
🎯 Use Cases:
• Image analysis & description
• Visual memory assistance
• Recipe generation from food images
• Technical documentation support
Try it now: https://huggingface.co/spaces/NexaAIDev/omnivlm-dpo-demo
Source: https://nexa.ai/blogs/omni-vision
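The post above cites a 9x token reduction from 729 to 81 vision tokens. One straightforward way to get exactly that ratio is to group each 3x3 neighborhood of patch embeddings into a single, 9x wider token before the projection layer; the numpy sketch below illustrates only that 729 → 81 bookkeeping and is not Nexa's actual projector code. The 27x27 grid and the hidden size of 1152 are assumptions based on typical SigLIP configurations.

```python
# Illustration of the 729 -> 81 token-reduction arithmetic from the post above.
# Assumption: the vision encoder emits a 27x27 grid of patch tokens (729 total)
# with an assumed hidden size of 1152. Grouping each 3x3 neighborhood into one
# token yields a 9x9 grid = 81 tokens, each 9x wider. Bookkeeping sketch only,
# not Nexa's actual implementation.
import numpy as np

grid, hidden = 27, 1152                           # 27*27 = 729 vision tokens
tokens = np.random.randn(grid * grid, hidden)     # stand-in patch embeddings

# Split the 27x27 grid into 9x9 blocks of 3x3 patches, then flatten each
# block's 9 embeddings into a single (hidden * 9)-wide vector.
blocks = tokens.reshape(9, 3, 9, 3, hidden)       # (row_blk, r_in, col_blk, c_in, h)
blocks = blocks.transpose(0, 2, 1, 3, 4)          # (row_blk, col_blk, r_in, c_in, h)
reduced = blocks.reshape(81, 3 * 3 * hidden)      # 81 tokens, 9x wider each

print(tokens.shape, "->", reduced.shape)          # (729, 1152) -> (81, 10368)
```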