#neural-engine — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #neural-engine, aggregated by home.social.
-
AI-Driven ‘Guitar Wiz’ App Transforms the iPhone and Apple Watch into a World-Class Music Tutor
#TycoonWorld #GuitarWiz #AIMusic #MusicTech #AIInnovation #ArtificialIntelligence #AppleEcosystem #iPhoneApp #AppleWatch #EdTech #MusicEducation #DigitalLearning #StartupIndia #BengaluruStartups #TechInnovation #MobileAppDevelopment #AIStartup #NeuralEngine #FutureOfLearning #CreativeTechnology #InnovationInMusic #MusicIndustryTech #GlobalStartups #AppInnovation
-
I Wanted Podcast Transcriptions. iOS 26 Delivered (and Nearly Melted My Phone).
Testing iOS 26’s on-device speech recognition: faster than realtime, but your phone might disagree
Apple’s iOS 26 introduced SpeechTranscriber – a promise of on-device, private, offline podcast transcription. No cloud, no subscription, just pure silicon magic. I built it into my RSS reader app. Here’s what actually happened.
The Setup
- Device: iPhone 17 Pro Max (Orange, if you’re curious)
- iOS Version: 26.2
- Test Episodes:
- The Talk Show #436 (95 minutes)
- Upgrade #594 (106 minutes)
- ATP #668 (114 minutes)
- ATP #669 (122 minutes)
The Good News: It’s Actually Fast
| Episode | Duration | Transcription Time | Realtime Factor | Words | Words/sec |
|---|---|---|---|---|---|
| Talk Show #436 | 1h 35m | 15m 22s | 6.2x | 17,303 | 18.8 |
| Upgrade #594 | 1h 46m | 20m 4s | 5.3x | 19,975 | 16.6 |
| ATP #668 | 1h 54m | 24m 49s | 4.6x | 23,892 | 16.0 |

4.6x to 6.2x faster than realtime. Nearly 2-hour podcasts transcribed in under 25 minutes. The Neural Engine absolutely crushes this.
The Pipeline Breakdown
The transcription happens in two phases (example from Upgrade #594):
- Audio Analysis: 2m 2s
- Initial pass through the audio file
- Roughly 1 second of analysis per minute of audio
- Results Collection: 18m 0s
- Iterating through ~1,288 speech segments
- Each segment yields transcribed text
The Bad News: Thermal Throttling Is Real
During my first test, I made a critical mistake: running two transcriptions simultaneously while charging.
The result? My phone got noticeably hot. Battery optimization warnings appeared. And performance dropped dramatically:
| Condition | Realtime Factor | Performance Hit |
|---|---|---|
| Single transcription | 4.6x – 6.2x | Baseline |
| Two parallel transcriptions | 2.7x | 46% slower |

The logs showed alternating progress updates as iOS juggled both workloads:

🎙️ 📝 Progress: 34% - 88 segments // Transcription A
🎙️ 📝 Progress: 44% - 98 segments // Transcription B
🎙️ 📝 Progress: 37% - 98 segments // Transcription A

The Neural Engine throttles hard when thermals get bad. When I ran a single transcription without charging, the ETA stayed consistent and completed on schedule.
The Ugly: iOS Kills Background Tasks
Even with BGTaskScheduler, iOS terminated my background transcription:

🎙️ Background transcription task triggered by iOS
⏱️ Background transcription task expired (iOS terminated it)

For long podcasts, you need to keep the app in the foreground. iOS’s aggressive app suspension doesn’t play nice with hour-long ML workloads.
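For context, here’s a minimal sketch of the BGTaskScheduler wiring involved – the task identifier and `runTranscription()` are hypothetical placeholders, and the identifier would also need to appear under BGTaskSchedulerPermittedIdentifiers in Info.plist. Even with all of this in place, iOS decides when (and for how long) the task actually runs.

```swift
import BackgroundTasks

// Hypothetical identifier; must also be listed under
// BGTaskSchedulerPermittedIdentifiers in Info.plist.
let transcribeTaskID = "com.example.podcasts.transcribe"

// Call once, early in app launch.
func registerTranscriptionTask() {
    BGTaskScheduler.shared.register(forTaskWithIdentifier: transcribeTaskID, using: nil) { task in
        guard let task = task as? BGProcessingTask else { return }

        let work = Task {
            // Placeholder for the SpeechTranscriber pipeline shown below.
            // try await runTranscription()
            task.setTaskCompleted(success: true)
        }

        // iOS calls this shortly before killing the task: cancel the
        // pipeline and persist any partial transcript.
        task.expirationHandler = {
            work.cancel()
            task.setTaskCompleted(success: false)
        }
    }
}

// Call when a new episode finishes downloading.
func scheduleTranscription() throws {
    let request = BGProcessingTaskRequest(identifier: transcribeTaskID)
    request.requiresNetworkConnectivity = false // fully on-device
    request.requiresExternalPower = false       // charging hurts thermals here anyway
    try BGTaskScheduler.shared.submit(request)
}
```

Handling the expiration callback gracefully matters more than the scheduling itself: as the logs above show, expiration is the common case for hour-long workloads.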
AI Chapter Generation: The Real Win
Here’s where it gets interesting. Once you have a transcript, generating AI chapters is blazingly fast.
Note: ATP, Talk Show, and Upgrade already include chapters via ID3 tags – this is an experiment to see what on-device AI can generate. But Planet Money doesn’t have chapters, making it a real use case where AI generation adds genuine value.
And we’re not alone in this approach. As Mike Hurley and Jason Snell discussed on Upgrade #594, Apple is doing exactly this in iOS 26.2’s Podcasts app:
“One of the most interesting things to me is the changes in the podcast app in 26.2… AI generated chapters for podcasts that do not support them… They are creating their own chapters based on the topics.”
Jason nailed the insight: “The transcripts [are] a feature that unlocks a lot of other features, because now they kind of understand the content of the podcast.”
That’s exactly what we’re doing here – using on-device transcription as a foundation for AI-powered chapter generation:
| Episode | Transcript Size | Chapters Generated | Time |
|---|---|---|---|
| ATP #669 | 143,603 chars (~26,387 words) | 27 chapters | 2m 1s |
| Talk Show #436 | ~17,303 words | 13 chapters | 1m 40s |

The AI identified topic changes, extracted key phrases for timestamps, and generated descriptive chapter titles – all in under 2 minutes for multi-hour podcasts.
Sample generated chapters:
📍 0:00-2:18: Snowfall in Richmond
📍 42:43-49:11: Intel-Apple Chip Collaboration Speculations
📍 62:46-65:00: Executive Transitions at Apple
📍 95:56-105:04: Core Values and Apple's Evolution
The Code
Using iOS 26’s SpeechTranscriber is surprisingly clean:

```swift
@available(iOS 26.0, *)
func transcribe(fileURL: URL) async throws -> String {
    let locale = try await findSupportedLocale(preferring: "en")
    let transcriber = SpeechTranscriber(locale: locale, preset: .transcription)
    let analyzer = SpeechAnalyzer(modules: [transcriber])
    let audioFile = try AVAudioFile(forReading: fileURL)

    if let lastSample = try await analyzer.analyzeSequence(from: audioFile) {
        try await analyzer.finalizeAndFinish(through: lastSample)
    }

    var transcription = ""
    for try await result in transcriber.results {
        if result.isFinal {
            transcription += String(result.text.characters) + " "
        }
    }
    return transcription
}
```

Fast vs Accurate Mode: A Surprising Finding
iOS 26 offers two main transcription presets:
- .transcription – Standard accurate mode
- .progressiveTranscription – “Fast” mode with progressive results
I assumed Fast mode would be… faster. The results were mixed.
| Episode | Mode | Condition | Realtime Factor | Words/sec |
|---|---|---|---|---|
| Talk Show #436 | Accurate | Solo, cold | 6.2x | 18.8 |
| Upgrade #594 | Accurate | Solo | 5.3x | 16.6 |
| ATP #668 | Accurate | Solo | 4.6x | 16.0 |
| Planet Money | Fast | Solo | 3.8x | 12.2 |
| Planet Money | Accurate | Solo, warm | 3.5x | 11.4 |

On the same 31-minute episode, Fast mode (3.8x) was slightly faster than Accurate (3.5x). But both were significantly slower than the longer episode tests – likely due to residual heat from previous runs.
The “progressive” preset appears optimized for live/streaming transcription. For batch processing of pre-recorded files, results are similar when thermals are equivalent.
Lesson: Don’t assume “fast” means faster for your use case. Profile both.
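In practice the preset choice is a single argument at setup time. A sketch of how it plugs into the transcriber construction from the earlier code (the `SpeechTranscriber.Preset` type name is my assumption, inferred from the call sites above):

```swift
import Speech

@available(iOS 26.0, *)
func makeTranscriber(locale: Locale, forLiveAudio live: Bool) -> SpeechTranscriber {
    // .progressiveTranscription streams volatile partial results as audio
    // is analyzed, which suits live captioning; .transcription waits for
    // finalized text, and profiled no slower for pre-recorded files here.
    let preset: SpeechTranscriber.Preset = live ? .progressiveTranscription : .transcription
    return SpeechTranscriber(locale: locale, preset: preset)
}
```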
Recommendations
- Use .transcription for downloaded files – It’s actually faster for batch processing
- Don’t charge while transcribing – Thermal throttling is real
- One transcription at a time – The Neural Engine doesn’t parallelize well
- Keep the app in foreground – iOS will kill background ML tasks
- Expect ~5x realtime – About 12-13 minutes per hour of audio under ideal conditions
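That ~5x rule of thumb is easy to turn into a rough ETA for a progress UI. Purely illustrative; as the tables above show, the real factor swings with thermals:

```swift
import Foundation

/// Rough transcription ETA derived from the ~5x realtime factor measured above.
/// The factor drops sharply when the device is warm or charging.
func estimatedTranscriptionTime(audioDuration: TimeInterval,
                                realtimeFactor: Double = 5.0) -> TimeInterval {
    audioDuration / realtimeFactor
}

// A 1-hour episode at 5x works out to 12 minutes of processing.
let eta = estimatedTranscriptionTime(audioDuration: 3600) // 720 seconds
```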
The Verdict
iOS 26’s on-device transcription is genuinely impressive:
- Privacy: Audio never leaves your device
- Speed: 5x faster than realtime (when not throttled)
- Quality: Surprisingly accurate for conversational podcasts
- Offline: Once the model is downloaded, no internet required
The main gotchas are thermal management and iOS’s background task limitations. But for a first-generation on-device transcription API? Apple’s Neural Engine delivers.
Now if you’ll excuse me, I have 26,387 words of ATP to search through.
Tested on iPhone 17 Pro Max running iOS 26.x. Your mileage may vary on older devices.
Raw Test Data
Upgrade #594
- Audio Duration: 1h 46m 24s (106 min)
- Audio Analysis Phase: 2m 2s
- Results Collection Phase: 18m 0s
- Total Transcription Time: 20m 4s
- Realtime Factor: 5.3x (faster than audio playback)
- Words Transcribed: 19,975
- Processing Rate: 16.6 words/sec
- Segments Processed: 1,288
ATP #668
- Audio Duration: 1h 53m 54s (114 min)
- Audio Analysis Phase: 2m 20s
- Results Collection Phase: 22m 28s
- Total Transcription Time: 24m 49s
- Realtime Factor: 4.6x (faster than audio playback)
- Words Transcribed: 23,892
- Processing Rate: 16.0 words/sec
- Segments Processed: 1,557
ATP #669 Chapter Generation
- Audio Duration: 2h 2m 13s (122 min)
- Transcription Size: 143,603 characters, ~26,387 words
- Chapters Generated: 27
- Total Time: 2m 1s
- Processing Rate: ~219 words/sec
Talk Show #436
- Audio Duration: 1h 35m 52s (95 min)
- Audio Analysis Phase: 1m 37s
- Results Collection Phase: 13m 44s
- Total Transcription Time: 15m 22s
- Realtime Factor: 6.2x (faster than audio playback) ← Fastest test!
- Words Transcribed: 17,303
- Processing Rate: 18.8 words/sec
- Segments Processed: 971
Talk Show #436 Chapter Generation
- Transcription Size: ~17,303 words
- Chapters Generated: 13
- Total Time: 1m 40s
Planet Money – Chicago Parking Meters (Fast Mode)
- Audio Duration: 30m 56s (31 min)
- Audio Analysis Phase: 1m 3s
- Results Collection Phase: 7m 5s
- Total Transcription Time: 8m 9s
- Realtime Factor: 3.8x
- Words Transcribed: 5,981
- Processing Rate: 12.2 words/sec
- Segments Processed: 472
- Mode: .progressiveTranscription (Fast)
Planet Money Chapter Generation (Fast Mode)
- Transcription Size: ~5,981 words
- Chapters Generated: 8
- Total Time: 31.9 sec
Planet Money – Accurate Mode (Parallel Stress Test)
- Audio Duration: 30m 56s (31 min)
- Audio Analysis Phase: 1m 9s
- Results Collection Phase: 10m 8s
- Total Transcription Time: 11m 19s
- Realtime Factor: 2.7x ← Severely throttled (ran 2 simultaneous)
- Words Transcribed: 5,983
- Processing Rate: 8.8 words/sec
- Segments Processed: 476
- Mode: .transcription (Accurate)
- Note: Ran in parallel with another transcription – 46% performance hit
Planet Money – Accurate Mode (Solo, Warm Device)
- Audio Duration: 30m 56s (31 min)
- Audio Analysis Phase: 1m 11s
- Results Collection Phase: 7m 32s
- Total Transcription Time: 8m 44s
- Realtime Factor: 3.5x ← Device still warm from previous tests
- Words Transcribed: 5,983
- Processing Rate: 11.4 words/sec
- Segments Processed: 477
- Mode: .transcription (Accurate)
- Note: Slightly slower than Fast mode on same episode (thermal impact)
Device Observations
- Thermal: Significant heat when running multiple transcriptions while charging
- Thermal Carryover: Running tests back-to-back shows degraded performance (6.2x cold → 3.5x warm)
- Cool-down Recommended: Wait 5-10 minutes between long transcriptions for optimal performance
- Battery Notifications: Battery optimization warnings triggered during parallel operations
- Background Tasks: iOS terminated BGTaskScheduler tasks during long transcriptions
- Beta Warning: Cannot use modules with unallocated locales [en_US (fixed en_US)] – appears in logs but doesn’t block functionality
-
M4 vs. M5: Is Apple’s New Processor Worth It?
With the new M5 chip, Apple takes its processors to a new level. But is upgrading from the M4 to the M5 really worth it for you?
Clear Performance Gains with the M5
Apple has released the M5 chip as the successor to the M4 introduced in May 2024 and promises noticeable…
https://www.apfeltalk.de/magazin/news/m4-vs-m5-lohnt-sich-apples-neuer-prozessor/
#Mac #News #Apple #GPU #IPadPro #KI #M4Chip #M5Chip #MacBookPro #NeuralEngine -
Apple Unveils the M5, with a Focus on AI and More
The new Apple M5 chip represents the next generation of Apple’s in-house processor technology. It was built with 3-nanometer techno…
https://www.apfeltalk.de/magazin/feature/apple-stellt-m5-vor-mit-fokus-auf-ai-und-mehr/
#Feature #iPad #Mac #3NanometerTechnologie #AppleIntelligence #AppleM5 #AppleSilicon #GPUArchitektur #IPadPro #KILeistung #MacBookPro #NeuralAccelerator #NeuralEngine #RayTracing #Speicherdurchsatz #UnifiedMemory #VisionPro -
Vision Pro 2: These Three New Features Await You
Apple’s Vision Pro launched in early 2024. Rumors now suggest its successor will arrive soon. Here is an overview of the expected improvements.
M4 or M5 Chip: A Clear Performance Leap
The…
https://www.apfeltalk.de/magazin/news/vision-pro-2-diese-drei-neuerungen-erwarten-euch/
#News #Vision #Apple #Headset #Komfort #M4Chip #M5Chip #NeuralEngine #VirtualReality #VisionPro2 #Zubehr -
macOS Tahoe Finally Available for Mac Studio M3 Ultra
After a long wait, Mac Studio M3 Ultra users can now install macOS Tahoe. With an update, Apple has fixed the problem that previously blocked the installation.
Installation Errors with macOS Tahoe
Apple relea…
https://www.apfeltalk.de/magazin/news/macos-tahoe-endlich-fuer-mac-studio-m3-ultra-verfuegbar/
#Mac #News #Apple #Fehlerbehebung #M3Ultra #MacStudio #MacOS2601 #MacOSTahoe #NeuralEngine #Update -
Apple’s In-House Chip Strategy Paves the Way for Future AI Advances
With the current iPhone generation, Apple is expanding its control over hardware and software. In-house chip development plays a central…
https://www.apfeltalk.de/magazin/news/apples-eigene-chip-strategie-ebnet-weg-fuer-kuenftige-fortschritte-bei-ki/
#iPhone #News #A19Pro #Apple #AppleIntelligence #C1XModem #Chips #Energieeffizienz #GPU #Hardware #IPhone17 #KI #KnstlicheIntelligenz #Modem #NeuralEngine -
A19 vs. A19 Pro: The Chip Differences in the iPhone 17 Explained
With the introduction of the iPhone 17, Apple has presented three different chip variants for the first time. Wondering what the differences between the A19 and the A19 Pro are? We summarize the facts.
Differences Between the A19 and A19 Pro
https://www.apfeltalk.de/magazin/news/a19-vs-a19-pro-die-chip-unterschiede-im-iphone-17-erklaert/
#iPhone #News #A19 #A19Pro #Apple #Chipvergleich #GPU #IPhone17 #NeuralEngine -
These iOS 26 Features Require an iPhone 15 Pro or Newer
Apple introduced striking changes with iOS 26. One of them is the Liquid Glass design, which is available on all compatible devices. But many of the new fea…
https://www.apfeltalk.de/magazin/news/diese-ios-26-funktionen-benoetigen-iphone-15-pro-oder-neuer/
#iPhone #News #ASerieChips #AppleIntelligence #AppleWallet #IOS26 #IPhone15Pro #KI #LiquidGlassDesign #Livebersetzung #NeuralEngine #Shortcuts #SpatialScenes -
iPad Air M3 Unveiled: Faster and with a 13-Inch Variant
Apple has launched the latest iPad Air with the powerful M3 chip. This upgrade brings an enormous performance boost, combined with an advanced graphics architecture and integrated Apple Intelligence…
https://www.apfeltalk.de/magazin/feature/ipad-air-m3-vorgestellt-schneller-und-mit-13-zoll-variante/
#Feature #iPad #AppleIntelligence #chatGPT #IPadAir #M3Chip #MagicKeyboard #NeuralEngine #Raytracing -
Apple Scraps Plans for the M4 Extreme
According to a report from The Information, Apple has halted development of the high-powered “M4 Extreme” chip. The decision was made back in the summer of 2024 and may disappoint some high-end Mac users.
Apple…
https://www.apfeltalk.de/magazin/news/apple-streicht-plaene-fuer-m4-extreme/
#Mac #News #Apple #Broadcom #HighEndMacs #KIServer #KnstlicheIntelligenz #M4Extreme #M4Ultra #MacPro #NeuralEngine #Prozessorentwicklung -
Apple Promotes the A18 Pro Chip: New iPhone 16 Pro Ad Highlights Performance
Apple has released a new ad for the iPhone 16 Pro that puts the impressive performance of the A18 Pro chip front and center. While many ad campaig…
https://www.apfeltalk.de/magazin/news/apple-bewirbt-a18-pro-chip-neuer-iphone-16-pro-werbespot-hebt-leistung-hervor/
#iPhone #News #4KVideo #A18ProChip #Apple #CameraControl #Gaming #IPhone16Pro #NeuralEngine #Performance #USBC #Werbespot -
M4 MacBook Pro Reviews: Performance, Nano-Texture Display, and New Features
The first reviews of the new M4 MacBook Pro are in, painting a clear picture of Apple’s new pro notebook. With the M4, M4 Pro, and M4 Max chips, Apple focuses on high…
https://www.apfeltalk.de/magazin/news/m4-macbook-pro-reviews-leistung-nano-textur-display-und-neue-features/
#Mac #News #Apple #Leistungssteigerung #LiquidRetinaXDR #M4 #M4Max #M4Pro #MacBookPro #NanoTexturDisplay #NeuralEngine #Review #Thunderbolt5 -
Everyone does realize the iPhone 8 had the A11 Bionic with a neural engine onboard, right?
In 2017…
And Photos has been doing face and animal recognition since about then?
AI isn’t super new, it’s just a super new hype cycle
-
Blackmagic Design Releases DaVinci Resolve Studio 19.0
Out of beta now.
#DaVinciResolve19 #BlackmagicDesign #ReplayWorkflows #NeuralEngine #Edit #Media #Color #Fusion #ResolveFX #Fairlight #Codecs #ScriptingAPI #Software
-
Display Panels for the M4 MacBook Pro: Shipping Before the Start of Q4
Apple is preparing for the Q4 2024 launch of the new 14- and 16-inch MacBook Pro models with M4 chips. According to display analyst Ross Young, the display panels for these mode…
https://www.apfeltalk.de/magazin/feature/display-panels-fuer-m4-macbook-pro-auslieferung-vor-dem-q4-start/
#Feature #Mac #Apple #DisplayPanels #M4Chip #MacMini #MacStudio #MacBookAir #MacBookPro #NeuralEngine #Q4Launch #TSMC -
Blackmagic Design Releases Beta 6 of DaVinci Resolve 19 Studio
#DaVinciResolve19 #Beta6 #BlackmagicDesign #ReplayWorkflows #NeuralEngine #Edit #Media #Color #Fusion #ResolveFX #Fairlight #Codecs #ScriptingAPI #HDR
-
Meta Quest Introduces AI Features
The Meta Quest VR headsets will soon receive AI features, even before these are introduced on the Vision Pro…
https://www.apfeltalk.de/magazin/news/meta-quest-fuehrt-ki-funktionen-ein/
#News #Tellerrand #AppleIntelligence #ExperimentelleFunktionen #KIFunktionen #M2Chip #MetaAIWithVision #MetaQuest #NeuralEngine #PassthroughTechnologie #Prozessorkapazitt #RayBanSmartGlasses #TechnologischeFortschritte #VisionPro #VRHeadsets #WearableTechnologien -
Apple Expects High iPhone 16 Sales Based on Chip Orders
Apple has increased its chip orders with TSMC and plans to equip both the iPhone 16 and the iPhone 16 Pro with the A18 chip…
https://www.apfeltalk.de/magazin/news/apple-erwartet-hohe-verkaufszahlen-fuer-das-iphone-16-basierend-auf-chip-bestellungen/
#iPhone #KI #News #A18Chip #Apple #AppleIntelligence #ChipStrategie #IPhone15 #IPhone16 #IPhone16Pro #NeuralEngine #Speicher #TSMC #Verkaufszahlen -
Apple Releases 20 New Open-Source AI Models
Apple has published 20 new CoreML models and four datasets on the open-source platform Hugging Face. These are designed specifically for text and image AI applications.
The New Models
https://www.apfeltalk.de/magazin/news/apple-veroeffentlicht-20-neue-open-source-ki-modelle/
#News #Services #Apple #Bildklassifizierung #CoreML #Datenschutz #HuggingFace #KIEntwicklung #KIModelle #NeuralEngine #OpenSource #Tiefensegmentierung -
iOS 18 Boosts AI Performance of the iPhone 15 Pro Max
With iOS 18, Apple has achieved significant improvements in the AI performance of the iPhone 15 Pro Max, as recent benchmarks show.
New Benchmark Results Highlight the…
https://www.apfeltalk.de/magazin/news/ios-18-steigert-ki-leistung-des-iphone-15-pro-max/
#News #Services #A17ProChip #Apple #Geekbench #IOS18 #IPhone15ProMax #KILeistung #MaschinellesLernen #MLTensor #NeuralEngine #SoftwareOptimierung #TechnologieUpdates #WWDC2024 -
DaVinci Resolve 19 Public Beta 3 Released by Blackmagic Design
#DaVinciResolve19 #BlackmagicDesign #PublicBeta #Beta3 #Software #Workflows #NeuralEngine #Edit #Media #Color #Fusion #ResolveFX #Fairlight #Codecs #ScriptingAPI #Video #AI #Film #Cinema #Television
-
Apple Is Already Planning the Next Generation: MacBook Pro with M4 Chip
Not long after introducing the MacBook Pro and MacBook Air with the M3 chip, Apple is already turning its attention to the next generation. Rumors suggest that development work on the…
https://www.apfeltalk.de/magazin/news/apple-plant-bereits-die-naechste-generation-macbook-pro-mit-m4-chip/
#Mac #News #M3Chip #IOS18 #Technologieentwicklung #A18Chip #MacOS15 #NeuralEngine #Apple #M4Chip #MacBookPro #Innovation -
Can someone point me to a technical deep dive on the Apple Neural Engine? I cannot find anything remotely as detailed as I would like, in searching.
#Apple #AppleSilicon #M1 #M2 #M3 #NeuralEngine #AI #ARM #tech #technical #analysis #macOS #iOS
-
👉 Apple Announces the New MacBook Air with M3 Processor
The new MacBook Air with M3, launched today, combines an elegant design, notable power thanks to the M3 chip, and remarkable battery life of up to 18 hours.
https://gomoot.com/apple-annuncia-il-nuovo-macbook-air-con-processore-m3
#Apple @Apple #ips #LiquidRetina #M3 #MacBookAir #NeuralEngine #TrueTone