#historyofai — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #historyofai, aggregated by home.social.
-
So, what did we learn in last week's lecture?
(1) The bounty log (history of AI)
(2) Symbolic vs subsymbolic (The two schools)
(3) The mechanics of the chase (ML types)
(4) The black box evaluation
Stay tuned for this week's lecture on traditional ML technologies (k-Means, linear regression, decision trees).
#AI #machinelearning #cowboybebop #HistoryOfAI #modelEvaluation @fiz_karlsruhe @fizise #lecture #KDAI2026
-
After today's #KDAI2026 lecture on "Basic Machine Learning 01", you will understand why most of what people say about AI in public debate is either wrong, confused, or missing the point — and you will have the tools to do better. If you've ever wondered how Netflix predicts your next binge-watch or how self-driving cars navigate, this is where it all starts.
@fizise #AI #machinelearning #DeepLearning #transformers #llms #ontologies #historyofAI #lecture #StudentLife #FutureTech #STEM
-
The average chess players of Bletchley Park and AI research in Britain
#HackerNews #averagechessplayers #BletchleyPark #AIresearch #Britain #historyofAI #chessandtech
-
Starting in the 1990s, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advancements in nearly all NLP techniques of the era, laying the groundwork for today's AI.
F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA
#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique
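To make the idea concrete, a minimal Python sketch of a bigram language model on a toy corpus (an illustration only, not the large-scale models referenced above):

```python
# Minimal bigram language model sketch: estimate P(next_word | word) from counts.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(word):
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # e.g. {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```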
-
Next stop in our NLP timeline is the (mostly) futile attempts at machine translation during the Cold War era. The rule-based machine translation approach relied mostly on the creation of dictionaries and grammar programs. Its major drawback was that absolutely everything had to be made explicit.
#nlp #historyofscience #ise2025 #lecture #machinetranslation #coldwar #AI #historyofAI @tabea @enorouzi @sourisnumerique @fiz_karlsruhe @fizise
-
Summarizing our very brief #HistoryOfAI, which was published here over several weeks in a series of toots, let's have a look at the popularity dynamics of symbolic vs subsymbolic AI, put into perspective with historical AI heydays and winters via the Google Ngram Viewer.
https://books.google.com/ngrams/graph?content=ontology%2Cneural+network%2Cmachine+learning%2Cexpert+system&year_start=1955&year_end=2022&corpus=en&smoothing=3&case_insensitive=false
#ISE2024 #AI #ontologies #machinelearning #neuralnetworks #llms @fizise @sourisnumerique @enorouzi #semanticweb #knowledgegraphs
-
In 2022, with the advent of ChatGPT, large language models and AI in general gained unprecedented popularity. ChatGPT combined InstructGPT, a GPT-3 model complemented and fine-tuned with reinforcement learning from human feedback, Codex text-to-code capabilities, plus a massive engineering effort.
N. Lambert, et al. (2022). Illustrating Reinforcement Learning from Human Feedback (RLHF). https://huggingface.co/blog/rlhf
#HistoryOfAI #AI #ISE2024 @fizise @sourisnumerique @enorouzi #llm #gpt #llms
-
Higher, faster, farther... in 2021, generative AI gained momentum with the advent of DALL-E, a GPT-3 based zero-shot text-to-image model, and other major milestones such as GitHub Copilot, OpenAI Codex, WebGPT, and Google LaMDA.
Codex: Chen, M., et al. (2021). Evaluating Large Language Models Trained on Code, https://arxiv.org/abs/2107.03374
DALL-E: Ramesh, A. et al. (2021). Zero-Shot Text-to-Image Generation, https://arxiv.org/abs/2102.12092
#HistoryOfAI #AI #ISE2024 @fizise @sourisnumerique @enorouzi #llm #gpt
-
In 2020, GPT-3 was released by OpenAI, based on 45TB data crawled from the web. A “data quality” predictor was trained to boil down the training data to 550GB “high quality” data. Learning from the prompt (few-shot learning) was also introduced.
T. B. Brown et al. (2020). Language models are few-shot learners. NIPS 2020, pp.1877–1901. https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
#HistoryOfAI #AI #ISE2024 #llms #gpt #lecture @enorouzi @sourisnumerique @fizise
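To illustrate what "learning from the prompt" means, a minimal sketch of a few-shot prompt (the examples and labels are hypothetical; no model weights are updated):

```python
# Illustrative sketch of few-shot prompting: a handful of labeled examples are
# placed directly in the prompt, and the model is asked to continue the pattern
# for a new input. No training or fine-tuning takes place.
few_shot_prompt = """Review: "Loved every minute of it." Sentiment: positive
Review: "Waste of money." Sentiment: negative
Review: "The plot dragged, but the visuals were stunning." Sentiment:"""

# The prompt would then be sent to a language model of choice; its completion
# (e.g. " mixed") is the few-shot prediction.
print(few_shot_prompt)
```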
-
In 2019, OpenAI released GPT-2 as a direct scale-up of GPT, comprising 1.5B parameters and trained on 8M web pages.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners.
https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf
OpenAI blog post: https://openai.com/index/better-language-models/
GPT-2 on HuggingFace: https://huggingface.co/openai-community/gpt2
#HistoryOfAI #AI #llm #ISE2024 @fizise @enorouzi @sourisnumerique #gpt
-
In 2018, Generative Pre-trained Transformers (GPT, by OpenAI) and Bidirectional Encoder Representations from Transformers (BERT, by Google) are introduced.
Radford, A. et al (2018). Improving language understanding by generative pre-training, https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf
J. Devlin et al (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, ACL 2019, https://aclanthology.org/N19-1423
#HistoryOfAI #ISE2024 #AI #llm @fizise @enorouzi @sourisnumerique
-
In 2014, attention mechanisms were introduced by Bahdanau, Cho, and Bengio, allowing models to selectively focus on specific parts of the input. In 2017, the Transformer model by Ashish Vaswani et al. followed, which learns to encode and decode sequential information and is especially effective for tasks like machine translation and #NLP.
Attention: https://arxiv.org/pdf/1409.0473
Transformers: https://arxiv.org/pdf/1706.03762
#HistoryOfAI #AI #ISE2024 @fizise @sourisnumerique @enorouzi #transformers
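To make the mechanism concrete, a minimal numpy sketch of scaled dot-product attention, the core operation of the Transformer (an illustration only; the actual model uses multi-head attention with learned projections):

```python
# Minimal sketch of scaled dot-product attention.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (sequence_length, d) matrices of queries, keys, values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                             # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                                        # weighted sum of values

# Toy example: 3 tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 4)
```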
-
In 2013, Mikolov et al. (from Google) published word2vec, a neural network based framework to learn distributed representations of words as dense vectors in continuous space, aka word embeddings.
T. Mikolov et al. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781
https://arxiv.org/abs/1301.3781
#HistoryOfAI #AI #ise2024 #lecture #distributionalsemantics #wordembeddings #embeddings @sourisnumerique @enorouzi @fizise
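A minimal sketch of training such word embeddings on a toy corpus, assuming the gensim library (4.x) is available; the corpus and hyperparameters are illustrative placeholders, not the original setup:

```python
# Illustrative word2vec sketch using gensim (assumes gensim >= 4.x is installed).
from gensim.models import Word2Vec

sentences = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["cats", "and", "dogs", "are", "animals"],
]

# Skip-gram model (sg=1) with small, toy-sized hyperparameters.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv["cat"][:5])                   # first 5 dimensions of the dense vector for "cat"
print(model.wv.most_similar("cat", topn=3))  # nearest neighbours in embedding space
```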
-
In 1999, NVIDIA introduced the first Graphics Processing Unit (GPU) card, the NVIDIA GeForce 256, enabling an unprecedented speedup for parallel computations as required for machine learning. This innovation paved the way for the rapid advancement of deep learning algorithms.
John Peddie, Famous Graphics Chips: Nvidia’s GeForce 256, IEEE Computer Society.
https://www.computer.org/publications/tech-news/chasing-pixels/nvidias-geforce-256
#HistoryOfAI #ISE2024 #AI #deeplearning #machinelearning #lecture @sourisnumerique @enorouzi @fizise @fiz_karlsruhe
-
In 1996, Long Short-Term Memory (LSTM) recurrent neural networks were introduced by Sepp Hochreiter and Jürgen Schmidhuber, enabling #neuralnetworks to efficiently process sequences of data (instead of single data points), e.g. to learn from text and to generate it.
Hochreiter, Sepp; Schmidhuber, Juergen (1996). LSTM can solve hard long time lag problems. Advances in NIPS, pp. 473–479.
https://dl.acm.org/doi/10.5555/2998981.2999048
#HistoryOfAI #AI #ISE2024 #lecture @sourisnumerique @enorouzi @fizise
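A minimal sketch of running a sequence through an LSTM layer, assuming PyTorch is available (shapes and values are illustrative, not the original 1996 formulation):

```python
# Illustrative LSTM sketch using PyTorch (assumed available); shows how a
# recurrent layer consumes a whole sequence instead of single data points.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

x = torch.randn(4, 10, 8)        # batch of 4 sequences, 10 time steps, 8 features each
output, (h_n, c_n) = lstm(x)     # output: per-step hidden states; h_n/c_n: final states

print(output.shape)  # torch.Size([4, 10, 16])
print(h_n.shape)     # torch.Size([1, 4, 16])
```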
-
In 1994, Tim Berners-Lee introduced the #SemanticWeb in his plenary presentation at the 1st WWW conference in Geneva, Switzerland.
“I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web – the content, links, and transactions between people and computers. A Semantic Web, which makes this possible, has yet to emerge..."
Slides from TBL, 1994: https://www.w3.org/Talks/WWW94Tim/
#HistoryOfAI #AI #knowledgegraphs #lecture #ISE2024 @sourisnumerique @enorouzi
-
In 1968, Terry Winograd introduced SHRDLU, a natural language understanding agent that was able to plan and execute directives in a rudimentary 'blocks world'. In particular, SHRDLU emphasized the importance of user-friendly interfaces for HCI.
T. Winograd (1970). Procedures as a Representation for Data in a Computer Program for Understanding Natural Language, MIT AI Technical Report 235. https://web.archive.org/web/20201003212106/http://dspace.mit.edu/bitstream/handle/1721.1/7095/AITR-235.pdf
#HistoryOfAI #AI #lecture #ISE2024 @fiz_karlsruhe @sourisnumerique @enorouzi
-
In 1965, Dendral, one of the first expert systems, was introduced by Edward Feigenbaum, Joshua Lederberg, and Carl Djerassi. It was supposed to help organic chemists identify unknown organic molecules by analyzing their mass spectra and using a chemistry knowledge base.
http://web.mit.edu/6.034/www/6.s966/dendral-history.pdf
#HistoryOfAI #ISE2024 #lecture #AI #chemistry #expertsystem @fizise @enorouzi @sourisnumerique
-
During the Cold War, rule-based machine translation from English to Russian and vice versa was a hot topic. However, to translate languages with rules, you have to explicitly cover an innumerable number of exceptions. Thus, government funding for machine translation was cut in 1966, leading to the first AI winter.
W.J. Hutchins (1985) Machine Translation: Past, Present, and Future, Longman. p.5
https://archive.org/details/machinetranslati0000unse_q9u2
#AI #HistoryOfAI #ISE2024 @sourisnumerique @enorouzi @fizise #lecture
-
Two millennia after Aristotle, Gottfried Wilhelm Leibniz adopted the idea of representing knowledge in a (mathematical) universal language and proposed the calculus ratiocinator to reason over this knowledge.
G. W. Leibniz (1676), De arte characteristica ad perficiendas scientias ratione nitentes https://www.uni-muenster.de/Leibniz/DatenVI4/VI4a2.pdf#page=401
lecture slides: https://docs.google.com/presentation/d/1SL4CpG0jMPaCXXyqLXmuDXTn_JResItKET8PKnHlbrE/edit?usp=drive_link
#HistoryOfAI #AI #ISE2024 #lecture @fizise @enorouzi @sourisnumerique #leibniz #philosophy #calculemus
-
Knowledge Representation and Symbolic Reasoning as another AI discipline are much older than machine learning. Already in the 4th century BCE, the Greek philosopher Aristotle suggested ten universal categories under which to place every object of human apprehension.
Studtmann, P., Aristotle's Categories. In Zalta, E.N. (ed.), Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/aristotle-categories/
#HistoryOfAI #AI #ISE2024 #knowledgerepresentation #symbolicAI #philosophy @sourisnumerique @enorouzi @fizise
-
AI as a scientific discipline started with the 1956 Dartmouth Summer Workshop initiated by John McCarthy together with Marvin Minsky, Allen Newell, Herbert Simon, and others.
"An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
Proposal for the Dartmouth Summer Project on #AI: https://raysolomonoff.com/dartmouth/boxa/dart564props.pdf
#HistoryOfAI #AI #ISE2024 #lecture @fizise @sourisnumerique @enorouzi
-
After the first successes of AI research in the late 1950s and 60s, the media and even scientists were rather enthusiastic in their prognoses of what would come next. Even Marvin Minsky predicted in 1970: "... in from three to eight years we will have a machine with the general intelligence of an average human being".
So, are we all doomed? Or do we simply have a tendency to overestimate technology?
#AI #HistoryOfAI #ISE2024 #singularity #GAI #hal9000 @fizise @enorouzi @sourisnumerique #aiart
-
In 1957, the Mark I Perceptron, developed by Frank Rosenblatt at Cornell Aeronautical Laboratory, was able to learn and to recognize handwritten digits, read via a simple 20x20 array of photocells, adapting the weights of the perceptron via potentiometers and small electric motors.
https://en.wikipedia.org/wiki/Perceptron#Mark_I_Perceptron_machine
#HistoryOfAI #ISE2024 #lecture #neuralnetworks #connectionism @fizise @enorouzi @sourisnumerique
-
Iteratively adjusting the weights of a neuron according to the errors created by comparing expected output with actual output is the basis of Frank Rosenblatt's perceptron (1957), the first artificial neural network.
F. Rosenblatt (1958), The perceptron: a probabilistic model for information storage and organization in the brain. Psyc. Review, 65(6), 386–408.
https://doi.org/10.1037/h0042519
#HistoryOfAI #ISE2024 #neuralnetwork #AI #lecture @sourisnumerique @enorouzi @fizise #timeline #connectionism
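A minimal Python sketch of this error-driven weight update on a toy task (an illustration of the idea, not Rosenblatt's original hardware or experiments):

```python
# Minimal perceptron learning sketch: weights are nudged whenever the
# predicted output differs from the expected one.
import numpy as np

# Toy task: learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)
b = 0.0
lr = 0.1

for _ in range(20):                         # a few passes over the data
    for x_i, target in zip(X, y):
        prediction = int(w @ x_i + b > 0)   # step activation
        error = target - prediction
        w += lr * error * x_i               # adjust weights according to the error
        b += lr * error

print([int(w @ x_i + b > 0) for x_i in X])  # expected: [0, 0, 0, 1]
```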
-
A (simplified) mathematical model of the neuron and its biological function was suggested by W. S. McCulloch and W. Pitts in 1943. At its core was a binary threshold function, with which the artificial neurons were also able to emulate basic Boolean functions.
W.S. McCulloch, W. Pitts: A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, vol. 5 (1943), pp. 115–133.
#HistoryOfAI #ISE2024 #AI #neuralnetworks #lecture @sourisnumerique @enorouzi
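A minimal sketch of such a binary threshold unit emulating Boolean functions (weights and thresholds chosen purely for illustration):

```python
# Illustrative McCulloch-Pitts style threshold unit: output 1 if the weighted
# sum of binary inputs reaches a threshold, 0 otherwise.
def threshold_unit(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Emulating basic Boolean functions with fixed weights and thresholds:
AND = lambda a, b: threshold_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: threshold_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    threshold_unit([a],    [-1],   threshold=0)

print([AND(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
print([OR(a, b)  for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 1]
print([NOT(a)    for a in (0, 1)])                               # [1, 0]
```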
-
We start our #HistoryOfAI with Donald Hebb's efforts to investigate the principles of biological neural networks.
Hebb's Law: "Neurons that fire together wire together."
Hebb, D. O. (1949). The organization of behavior: A neuropsychological theory. New York: Wiley.
https://pure.mpg.de/rest/items/item_2346268_3/component/file_2346267/content
#ISE2024 #AI #Connectionism #neuralnetworks #lecture @sourisnumerique @enorouzi @heikef @NFDI4DS @nfdi4culture @fizise @fiz_karlsruhe
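A minimal sketch of the corresponding Hebbian weight update, where a connection between two (hypothetical) neurons grows whenever they are active together:

```python
# Illustrative Hebbian learning sketch ("fire together, wire together"):
# a connection weight increases whenever pre- and post-synaptic activity coincide.
import numpy as np

rng = np.random.default_rng(0)
lr = 0.1
w = 0.0

for _ in range(100):
    pre = rng.integers(0, 2)                        # pre-synaptic neuron fires (1) or not (0)
    post = pre if rng.random() < 0.8 else 1 - pre   # post-synaptic neuron mostly follows pre
    w += lr * pre * post                            # Hebb's rule: strengthen on co-activation

print(round(w, 2))  # the more often the neurons fire together, the larger w becomes
```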
-
We started our #ISE2024 lecture on Basic Machine Learning today with "A (very) brief History of AI", which - in the end - took longer than expected ;-) ... story and anecdote time again!
lecture slides: https://drive.google.com/file/d/1smo2qdVbXIVWqE9JHrtD6Ao6LDj1q9Jn/view?usp=drive_link
@enorouzi @sourisnumerique @fizise @fiz_karlsruhe #AI #HistoryOfAI #perceptron #expertsystem #knowledgebase #symbolicAI #subsymbolicAI #neuralnetworks #connectivism #generativeAI #creativeAI #AIart #astronaut
-
Next stop in our Brief History of (Large) #languagemodels is 2019: GPT-2 was released by OpenAI as a direct scale-up of GPT, comprising 1.5B parameters and trained on 8M web pages.
Slides (from #ise2023 lecture): https://drive.google.com/file/d/1atNvMYNkeKDwXP3olHXzloa09S5pzjXb/view?usp=drive_link
Paper: https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf
#llm #llms #ai #artificialintelligence #generativeai #gpt #lecture #historyofAI
-
“The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right.”
Quoting Leibniz for last week's #ise2023 lecture with a brief #HistoryofAI
Slides: https://drive.google.com/file/d/18_xPnJDM04I7pDihMMhoFIuDshfUKMk9/view?usp=sharing
#ai #artificialintelligence #lecture @fizise @KIT_Karlsruhe @ebrahim #calculemus #logics #leibniz #philosophy
-
In the #ise2023 lecture today, we are opening a new chapter on "Basic Machine Learning". This week you will learn about the history of AI, the types of AI and Machine Learning, the main challenges of ML and the general ML workflow.
Slides: https://drive.google.com/file/d/18_xPnJDM04I7pDihMMhoFIuDshfUKMk9/view?usp=sharing
#machinelearning #AI #artificialintelligence #lecture #deeplearning #historyofAI @fizise @enorouzi @KIT_Karlsruhe @nfdi4culture #stablediffusionart #creativeai #aiart