#llm — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #llm, aggregated by home.social.
-
One does not understand code copied from the #LLM. It's the same as pasting from a textbook or from a colleague at uni: you THINK you understand it, but you don't.
We fool ourselves and say, "yeah, I *COULD* write it on my own."
Copying the solution to a math problem from someone who solved it is dangerous, because "yeah, I understand it" is not the same as "I learned to do it myself".
That's why learning groups with diverse skill levels are so dangerous. -
"Vector", a data search and launcher app for Apple Silicon Macs that uses local AI, is free for a limited time.
https://applech2.com/archives/20260514-vector-intelligent-search-for-macos.html#applech2 #仕事効率化 #Intelligent_Search_for_macOS_Vector #LLM #AI #macOS #Mac #アプリ #Spotlight #レビュー
-
LLM Witch Hunts are getting F'in Irritating
https://write.as/shantnu/llm-witch-hunts-are-getting-really-fin-irritating
-
→ Friends Don't Let Friends Use Ollama
https://sleepingrobots.com/dreams/stop-using-ollama/
“#Ollama’s entire inference capability comes from llama.cpp, the C++ #inference engine created by Georgi Gerganov in March 2023. Gerganov’s project is what made it possible to run LLaMA models on consumer #laptops at all, he hacked together the first version in an evening, and it kicked off the entire #local LLM movement. […] It’s truly #community-driven, #MIT-licensed, and under active development with 450+ #contributors.”
-
AI: Not Conscious, but UNPREDICTABLE!
AI: Not Conscious, but UNPREDICTABLE makes for a dangerous cocktail!
AI is obviously NOT conscious. To be con-scious, one needs something ELSE to go with the science... at least etymologically speaking. But the science of AI fully explains AI; there is no need for anything else to go with the knowledge. Present AI, Large Language Models (LLMs), is all about finding gradients in extremely high-dimensional spaces.
Another type of AI is possible: WMs: World Models, where the AI basically learns physics. WMs will be superior to LLMs for most applications beyond Internet searches.
Consciousness requires the feeling of existing: present day AI do not have this capability, all the more because we do not know what it consists of.
The smallest known structures in the brain are Dendritic Nano Tubes, DNTs, which are on the order of 100 nanometers across… 1000 atoms wide (the molecules causing Alzheimer's squeeze through them). At a scale ten times smaller, Quantum effects appear, and it is likely that they play a crucial role in consciousness. Why? Because at the ten-nanometer scale there are not only Quantum effects but Quantum Field Theory effects (there are libraries of geometries and the Casimir Effect they bring), and the dynamics of the vacuum… Turns out the soul's material is vacuum energy… Something like that is looming… It's curious that QFT is not vibrating more in the collective consciousness… Obviously a problem with the High Energy physicists' lack of elocution or imagination…
Present day AI rests on electronics which work like canal systems (canals carrying electrons, not water molecules)… Quantum effects caused by said electrons are deliberately obliterated so that “semiconductors” can behave like classical canal systems.
The Quantum Computer, QC, is the exact opposite: far from avoiding Quantum effects, it exploits them maximally to compute with. The QC rests on entanglement, nonlocality, and a new property called “magic” (basically what classical entanglement can’t duplicate).
If Twenty-first Century technology can do it, it would be naive to believe that evolution did not get there first, especially since Quantum effects have already been shown to play a direct role in neurology (for example, in birds seeing the Earth’s magnetic field).
Thus, if we want consciousness, we need the QC.
QC will enable us to create AC, Artificial Consciousness.
***
A different question is whether present AI can SIMULATE consciousness. The answer is obviously yes. Confusing this simulation with a creature possessing (Quantum) consciousness is the AI DELUSION. Dawkins, who wrote a book called The God Delusion, is himself suffering from an arguably worse condition, the AI Delusion. It’s worse because AI, as we have it, is just classical canal engineering and smart high-dimensional calculus… There is no mystery whatsoever.
***
Even though the AIs we have now are not conscious, and can’t be conscious, they are still unpredictable. I actually have a mathematical PROOF of this (using mathematics and logic from the 1870–1940 period…).
That makes uncontrolled AI a potential master of humanity.
Unless we are very smart and careful…
Patrice Ayme
Coming soon to a neighborhood near you… Robot contemplates Valles Marineris on the equator of Mars, 10 kilometers deep… The colonization of Mars will be initially driven by AI… Next NASA Mars mission will be propelled by a nuclear engine, and three AI helicopters will fly away with supersonic blades before hitting the ground… And communicate with Earth through overhead satellites…
#AI #ArtificialConsciousness #ArtificialIntelligence #Consciousness #llm #Philosophy #QuantumComputer #RichardDawkins #spirituality #Unpredictable #WorldModel -
Regex vs. LLM for B2B document extraction. This week, I tried out both.
:blobcoffee: The rule-based pipeline with pytesseract + regex worked perfectly for Layout A. For Layout B? Every single field returned None.
:blobcoffee: Because "PO Number" and "Order Reference" are the same thing for a human. Not for a regex pattern.
:blobcoffee: The LLM-based approach (pytesseract + Ollama + LLaMA 3) extracted both layouts correctly, without touching a single rule. It even normalized the date format automatically.
:blobcoffee: But LLMs aren't always the right answer. If your documents are stable, speed matters at scale, or explainability is required, regex might still win.
Full comparison with code and trade-off breakdown on TDS: https://shorturl.at/v4gdl
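The regex failure mode described above is easy to reproduce. Here is a minimal sketch (the layouts, labels, and field values are hypothetical, not taken from the article): a pattern written against one layout's label silently returns None when another layout uses a synonym for the same field.

```python
import re

# Two OCR'd snippets that mean the same thing but label it differently
# (hypothetical layouts, for illustration only).
layout_a = "PO Number: 45017\nDate: 2024-03-01"
layout_b = "Order Reference: 45017\nDate: 01.03.2024"

# A rule-based extractor written against Layout A's label.
PO_PATTERN = re.compile(r"PO Number:\s*(\d+)")

def extract_po(text: str):
    """Return the PO number, or None if the pattern never matches."""
    m = PO_PATTERN.search(text)
    return m.group(1) if m else None

print(extract_po(layout_a))  # 45017
print(extract_po(layout_b))  # None: "Order Reference" never matches the rule
```

Widening the pattern to cover every synonym is exactly the maintenance burden the LLM approach trades away, at the cost of speed and explainability.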
#Python #DataScience #business #technology #dataengineering #LLM #Automation #OCR