#repl — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #repl, aggregated by home.social.
-
Every time I want to leave a #lisp- or #scheme-based language, I stumble across an awesome talk that reminds me why I love it in the first place.
https://fosdem.org/2026/schedule/event/HDE7JZ-lisp-is-clay/
Damn, #repl-driven development is so damn awesome. Nowadays, if I need to write automation scripts, I use #babashka a lot instead of #bash or #python.
-
Programming Comfortably with Vim (Part II)
This is the second part of the series. In the first part, we covered the basic editor settings that make typing a program more convenient. Here we look at plugins and scripts for running programs in various languages from within the Vim editor.
-
Why I Stopped Writing Bash Scripts and Wrote My Own Language
From time to time I need to run a primitive scenario in the terminal, but every time it ends in yet another round of googling "bash iterate each file" or "bash file has string". What if terminal scripts could be written as a straightforward stream of declarative thought?
https://habr.com/ru/articles/1020728/
#scripting_language #bash #functional_programming #REPL #automation #open_source #Rust #Lisp #Haskell
-
One from the archives for #TextmodeTuesday. The post might be 3 years old, but I'm still using these snippets almost daily to visualize and debug data whilst I'm working in the Node REPL...
-
Whistler: Live eBPF Programming from the Common Lisp REPL
https://atgreen.github.io/repl-yell/posts/whistler/
#HackerNews #Whistler #eBPF #CommonLisp #REPL #Programming #LiveCoding
-
@cwebber So to fix that, let me tell you about the PR for spritely hoot-repl that reduces load times of the #Guile #Scheme web #REPL in #webassembly by at least 30% ☺
https://codeberg.org/spritely/hoot-repl/pulls/4
Though I’m sure you already know, so this is just an "I answered the review" notification, but more interesting than something about LLM agents ☺
-
So, apparently hacking #scheme is going to get even more fun with B.L.U.E., a sane, extendable, lisp-y, language-agnostic build system, and #Ares, the interactive hacking tool we always sensed was missing from our work. Yes, we now have insightful backtraces in #guile!
The future has come!
https://codeberg.org/lapislazuli/blue
https://git.sr.ht/~abcdw/guile-ares-rs
#guix #fosdem #fosdem2026 #blue #lisp #repl #buildsystem #reproducibility #hacking #fun #coding #interactiveprogramming
-
@akkartik Since #Forth is just so great for super concise code, allow me to add another example, here to transpile (a subset of) Forth into GLSL for livecoding shaders. This one is using my old 2015 CharlieVM and you can find all the example source snippets in the readme here:
https://github.com/thi-ng/charlie
The REPL itself is live at:
https://forth.thi.ng/
The attached screen capture shows 4 shader examples (the longest is 12 lines of code).
-
In the #Python programming language, the new #REPL in Python 3.13 (2024) added colorization to the #interpreter in #interactive Python, similar to the interface seen in later versions of #PyPy. Python 3.14 (2025) and Python 3.15 (2026) continue to improve the REPL, adding colorization of the Python #syntax itself.
-
At this point, I think I'm satisfied with the vim-go plugin providing me with a stoopid simple template for prototyping very basic example programs.
It's definitely not a #REPL like I'm used to with #Python or running from the #CLI; but, it's a bit of a useful workflow to get started. *shrug*
These keybindings help a bit:
```
augroup go
autocmd!
autocmd BufNewFile,BufRead *.go setlocal
\ noexpandtab
\ tabstop=4
\ shiftwidth=4
autocmd FileType go nmap <leader>b :<C-u>call <SID>build_go_files()<CR>
autocmd FileType go nmap <leader>d <Plug>(go-doc)
autocmd FileType go nmap <leader>f <Plug>(go-fmt)
autocmd FileType go nmap <leader>i <Plug>(go-info)
autocmd FileType go nmap <leader>l <Plug>(go-lint)
autocmd FileType go nmap <leader>r <Plug>(go-run)
autocmd FileType go nmap <leader>v <Plug>(go-vet)
autocmd FileType go nmap <leader>t <Plug>(go-test)
autocmd FileType go nmap <Leader>c <Plug>(go-coverage-toggle)
augroup END
```
-
While I was working on this, the article Python Numbers Every Programmer Should Know appeared on the orange website. In #LuaLang, and on a 16-bit target, these overheads are less -- for example, a number weighs 10 bytes instead of 24 bytes -- but overheads don't have much place to hide on a small, slow machine.
(Btw numbers cost 7 bytes each in 8-bit Microsoft BASIC so Lua isn't gratuitously inefficient here, even by the standards of 50 years ago.)
One place that makes overhead really obvious: a 64K segment holds a table of length, at most, 4,096 entries. That's 40,960 bytes, and Lua's strategy is to double allocation size every time it wants to grow the table. 2 x 40,960 exceeds a 64K segment, so 4,096 entries is the growth limit.
On a 640K machine, after deducting the ~250K (!) size of the interpreter (which is also fully loaded into RAM), you'll get maybe five full segments free if you're lucky. So that's like maybe 20,000 datums total, split across five tables.
Meanwhile a tiny-model #Forth / assembly / C program could handle 20,000 datums in a single segment without breaking too much of a sweat!
The efficiency has costs in programmer time, of course: worrying about data types, limits, overflows, etc. Those are the kinds of things I was hoping to avoid by using Lua on this hardware -- and to its credit, it does a good job insulating me from them. Its cost is that programs must be rewritten for speed in some other language once the rapid prototyping phase is over and reasonable speed / data capacity become important.
I'd estimate the threshold where traditional interpreters like Lua become okay for finished/polished software of any significant scope, is somewhere around 2MB RAM / 16MHz. So think, like, a base model 386. Maybe this is why the bulk of interpreters available in DOS are via DJGPP which requires a 386 or better anyway.
#BASIC was of course used on much smaller hardware, but was famously unsuited to speed or to large programs / data.
I know success stories for #Lisp in kilobytes of memory, but I'm not quite sure how they do it / to what extent the size of the interpreter, and overhead of data representation (tags + cons representation), eats into available memory and limits the scope of the program, as seen with other traditional interpreters.
This is beginning to explain why #Forth has such a niche on small systems. It has damn near zero size overhead on data structures. (The only overhead is for the interpreter core (a few K) and storing string names in the dictionary (which can be eliminated via various tricks)). ~1x size and ~10x speed overhead is the bargain of the century to unlock #repl based development. However, you're still stuck with the agonizing pain of manual memory management and numeric range problems / overflows. Which is probably why the world didn't stop with Forth, but continued on to bigger interpreters.
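The 4,096-entry arithmetic above can be sanity-checked with a short sketch (Python here purely for illustration; this is a simplified model of doubling growth, using the post's figures of a 64K segment and 10 bytes per Lua number):

```python
SEGMENT = 64 * 1024   # one 16-bit segment, in bytes
NUM_BYTES = 10        # per the post: one Lua number on this 16-bit target

def max_table_entries(segment: int = SEGMENT, per_entry: int = NUM_BYTES) -> int:
    """Largest power-of-two table size reachable when growing doubles the
    allocation and the doubled allocation must still fit in one segment."""
    n = 1
    while 2 * n * per_entry <= segment:
        n *= 2
    return n
```

With the post's numbers, max_table_entries() yields 4,096 entries (40,960 bytes): doubling once more would need 81,920 bytes, which overflows the 65,536-byte segment.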
-
✨ Behold, the latest marvel from the "cutting-edge" tech minds: an "enhanced" #REPL for Common #Lisp that promises to revolutionize your coding experience by doing...well, exactly what a REPL already does. 🤯 But don't worry, it comes with the added bonus of making you question why you ever thought coding was fun in the first place. 🥳
https://github.com/atgreen/icl #cuttingedge #techinnovation #codinghumor #softwaredevelopment #HackerNews #ngated
-
Libervia CLI Tip 20:
There is a REPL in the CLI that you can launch with `li shell`.
Inside, you can select a command or sub-command with `cmd`, and fix an argument with `use`:
> cmd pubsub
pubsub> use service pubsub.example.org
Then just enter the sub-command to run it on the given service:
pubsub> affiliations
This is equivalent to `li pubsub affiliation -s pubsub.example.org`.
This is handy for exploring services.
https://libervia.org/__b/doc/backend/libervia-cli/shell.html