#lualang — Public Fediverse posts
Live and recent posts from across the Fediverse tagged #lualang, aggregated by home.social.
-
It's a bit unfortunate that #Lua is a much more common word in Portuguese than #Python is in English. I see that there is the #LuaLang hashtag, and I'm sure it gets more use than #PythonLang (relative to the popularity of each programming language).
-
The best example of distilled software that comes to mind is Project #Oberon, which was distilled by Niklaus Wirth (and others) for most of Wirth's lifetime (if you count his earlier time working on Pascal and Modula as earlier steps of the distillation). https://projectoberon.net/
There are also #Forth and #Lisp, of course, but they've been distilled in many different directions by many people so there isn't a clear unifying idea. You have to get more specific. Now, Chuck Moore's evolution of Forth -> MachineForth -> ColorForth certainly counts as distillation.
#LuaLang also comes to mind. Porting the most modern Lua to the 188K TI-92+ calculator (last year) is what sold me on the idea that widely used modern software can remain useful on the oldest computers. That said, Lua is not entirely immune to bloat: I had to roll back from v5.4 to v5.2 to cut my memory usage from ~170K to ~128K 😉
-
While I was working on this, the article Python Numbers Every Programmer Should Know appeared on the orange website. In #LuaLang, and on a 16-bit target, these overheads are less -- for example, a number weighs 10 bytes instead of 24 bytes -- but overheads don't have much place to hide on a small, slow machine.
(Btw numbers cost 7 bytes each in 8-bit Microsoft BASIC so Lua isn't gratuitously inefficient here, even by the standards of 50 years ago.)
One place that makes the overhead really obvious: a 64K segment caps a table at 4,096 entries. That's 40,960 bytes, and Lua's strategy is to double the allocation each time it wants to grow the table; 2 x 40,960 exceeds a 64K segment, so 4,096 entries is the ceiling.
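A back-of-the-envelope sketch of that growth limit (my own arithmetic, assuming the 10-byte array slots measured above, not code from Lua's internals):

```lua
-- Find the largest power-of-two table size whose *next* doubling
-- still fits in one 64K real-mode segment, assuming 10-byte slots.
local SLOT = 10          -- bytes per array entry on this 16-bit build
local SEGMENT = 65536    -- one segment

local size = 1
while 2 * size * SLOT <= SEGMENT do
  size = size * 2
end
print(size, size * SLOT)  -- 4096 entries, 40960 bytes
```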
On a 640K machine, after deducting the ~250K (!) size of the interpreter (which is also fully loaded into RAM), you'll get maybe five full segments free if you're lucky. So that's like maybe 20,000 datums total, split across five tables.
Meanwhile a tiny-model #Forth / assembly / C program could handle 20,000 datums in a single segment without breaking too much of a sweat!
This efficiency has costs in programmer time, of course: worrying about data types, limits, overflows, etc. Those are the things I was hoping to avoid by using Lua on this hardware -- and to its credit, it does a good job of insulating me from them. Its cost is that programs must be rewritten for speed in some other language once they're out of the rapid prototyping phase and reasonable speed / data capacity becomes important.
I'd estimate the threshold where traditional interpreters like Lua become okay for finished/polished software of any significant scope, is somewhere around 2MB RAM / 16MHz. So think, like, a base model 386. Maybe this is why the bulk of interpreters available in DOS are via DJGPP which requires a 386 or better anyway.
#BASIC was of course used on much smaller hardware, but was famously unsuited to speed or to large programs / data.
I know success stories for #Lisp in kilobytes of memory, but I'm not quite sure how they do it / to what extent the size of the interpreter, and overhead of data representation (tags + cons representation), eats into available memory and limits the scope of the program, as seen with other traditional interpreters.
This is beginning to explain why #Forth has such a niche on small systems. It has damn near zero size overhead on data structures: the only overheads are the interpreter core (a few K) and the string names stored in the dictionary (which can be eliminated via various tricks). ~1x size and ~10x speed overhead is the bargain of the century to unlock #repl based development. However, you're still stuck with the agonizing pain of manual memory management and numeric range problems / overflows. Which is probably why the world didn't stop with Forth, but continued on to bigger interpreters.
-
My overnight activity on New Year's Eve was to rewrite the #uuencode utility that I lost in a battery-exhaustion incident. The old version was in #Forth, the new version in #LuaLang. Including interpreter size, the Lua version is 100x larger and 100x slower. I was not intending to provide a case study upholding Jeff Fox's writings about Forth efficiency, but there you go.
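For flavor, here is a minimal sketch of uuencode's inner loop in Lua -- my own illustration, not the rewritten utility: every 3 input bytes become 4 printable characters (six-bit value + 32, with 0 conventionally written as '`'), and each output line starts with a character encoding its byte count.

```lua
-- Map a 6-bit value (0..63) to its uuencode character.
-- The value 0 is written as '`' rather than space, per common practice.
local function uuchar(v)
  return v == 0 and "`" or string.char(v + 32)
end

-- Encode one line of up to 45 raw bytes.
local function uuline(s)
  local out = { uuchar(#s) }                 -- length prefix character
  for i = 1, #s, 3 do
    local a, b, c = s:byte(i, i + 2)
    b, c = b or 0, c or 0                    -- zero-pad a short tail
    out[#out + 1] = uuchar(math.floor(a / 4))
    out[#out + 1] = uuchar((a % 4) * 16 + math.floor(b / 16))
    out[#out + 1] = uuchar((b % 16) * 4 + math.floor(c / 64))
    out[#out + 1] = uuchar(c % 64)
  end
  return table.concat(out)
end

print(uuline("Cat"))  -- "#0V%T": '#' is length 3, then four data chars
```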
-
After adding the missing size check, #LuaLang behavior is much more benign.
A 64K segment size limit on table sizes isn't ideal, but it beats a hard crash, and it's a stable jumping-off point for further modifications.
-
#adventOfCode day 12 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/12.lua
- PC - 2ms
- Raspberry Pi 2: 76ms
- #ti92 Plus: Crashed
I wasn't expecting that to work!
And as usual, AoC is a good source of stressing cases to expose crashes/bugs in the #ticalc Lua port :p
Merry Christmas!
This is the first Advent of Code I've ever completed!
-
#adventOfCode day 10 in #LuaLang and #Mathematica
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/10.lua
- PC - 487 ms
- Raspberry Pi 4: a few seconds
- #ti92 Plus: N/A
Ok, finally all caught up and looking forward to some sleep and Day 12!
After a night and day in math land confusing myself with row echelon matrices and intersecting N-spaces, I remembered that I have a Raspberry Pi that for some reason has free preinstalled Mathematica.
So my Lua program code-gens a Mathematica program, which then runs on the Pi to solve Part B!
This generated code is checked in if you want to look at it - it's several thousand lines of simultaneous equations being solved with constraints applied: https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/10.m
Given all that, it's pleasantly fast. Mathematica over VNC on wifi is pretty laggy but the actual execution couldn't have taken more than a second or two!
(Yes, I did attempt to solve the equations on the TI-92+ #ticalc, as it has a very capable computer algebra system, but I couldn't figure out how to apply all the necessary constraints -- maybe later.)
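The code-gen step can be sketched like this (a toy illustration with made-up equations, not the actual generator): the Lua side just assembles Mathematica source as strings and writes it out for the Pi to run.

```lua
-- Collect constraints as Mathematica-syntax strings, then wrap them
-- in a Solve[] call. The equations here are placeholders.
local eqs, vars = {}, { "x", "y" }
eqs[#eqs + 1] = "x + y == 3"
eqs[#eqs + 1] = "x - y == 1"

local src = string.format("Solve[{%s}, {%s}]",
  table.concat(eqs, ", "), table.concat(vars, ", "))
print(src)  -- Solve[{x + y == 3, x - y == 1}, {x, y}]
```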
-
#adventOfCode day 11 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/11.lua
- PC - 1m 10s
- Raspberry Pi 2: N/A
- #ti92 Plus: N/A
EDIT: wow I added 3 lines and now this is one of my fastest programs
- PC - 2.1 ms
- Raspberry Pi 2 - 65 ms
- #ti92 Plus: N/A
Yeah, I don't have day 10 part B results to share yet. However, I took a break from that to do day 11!
It's brute force with a small twist to make it finish before the heat death of the universe. And it's based on intimate knowledge of my input file, so I don't know if it generalizes to others. At least the program itself is short....
Bonus: puzzle input visualized in graphviz!
-
#adventOfCode day 9 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/09.lua
- PC - 148 ms
- Raspberry Pi 2: 6.133 sec
- #ti92 Plus: out of memory
I'm very pleased with the speed here, although I'm sure looking at other people's solutions will deflate some of that pride.
The feasibility of the whole solution relied on a big insight I got while staring at the grid far too long. To avoid spoilers, I won't elaborate here.
My original version took about 4 seconds on PC: I owe some of the later ~20x optimizations to studying a friend's Rust solution after we both got ours working.
-
#adventOfCode day 8 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/08.lua
- PC - 1.79 sec
- Raspberry Pi 2: 57 sec
- #ti92 Plus: hahaha, no - judging by the scale factor from previous problems, it'd take about two months, if it fit in RAM, which it decidedly does not.
Eric is sadistic putting the first big problem on a work night, huh?
I'm lucky that breaking Part A into steps, then stringing together well-intentioned but non-optimal solutions to each step, was good enough to return an answer before the heat death of the universe.
I'm not sure how to make my Part A more efficient - it is O(N²) and takes a good 500 ms by itself on my PC. It'll be really interesting to learn faster ways from other people's solutions.
My Part B is doing obvious repeated work. I left some performance on the table to solve the problem sooner using building blocks I already had from Part A.
Getting this working on a TI-92+ or other retro platform seems like a daunting task!
-
#adventOfCode day 7 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/07.lua
- PC - 1.5 ms
- Raspberry Pi 2: 41 ms
- #ti92 Plus: >18 min
After a couple of days where the TI-92+ has been disagreeable, it was refreshing to get a puzzle where the Lua solution Just Works without memory exhaustion. Though, speedy it ain't. And when I got my camera out to photograph the real calc, I found it had crashed, so you get a boring emulator screenshot of it working instead.... ;)
I'm a little confused why it's this slow on the #ticalc: something about it seems difficult compared to the other working examples. I think it's because my approach generates lots of garbage so Lua's gc is working hard.
-
For anyone interested in the port of #LuaLang I'm using for #AdventOfCode on the #Ti92 Plus graphing calculator, I've uploaded the patched Lua sources and prebuilt binaries for TI-89 and TI-92+ to https://gitlab.cs.washington.edu/fidelp/lua92
Please let me know if you try it out!
-
#adventOfCode day 6 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/06.lua
- PC - 5.1 ms
- Raspberry Pi 2: 147 ms
- #ti92 Plus: out of memory
I'm steadily accumulating a backlog that I need to rewrite into C for the #ticalc. I'm really jonesing to switch back to the DOS-based HP 200LX palmtop... 640K RAM feels mighty spacious in comparison to the calculator. Maybe in my copious spare time I need to track down the memory corruption problem that's stopping the 16-bit MS-DOS Lua from working.
-
#adventOfCode day 5 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/05.lua
- PC - 1.7 ms
- Raspberry Pi 2: 49 ms
- #ti92 Plus: out of memory
The #ticalc doesn't make it through input parsing before exhausting memory. It will be necessary to rewrite in C again.
-
My port of #LuaLang to TI-92+ had a bug where math.huge was accidentally NaN instead of +∞. This is now fixed, so we can infinitely loop the fun way.
I like that the screen's slow update speed is clearly visible in the photo....
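This class of bug is easy to screen for, since NaN is the only value that differs from itself; a few sanity asserts (my own, not from the port's test suite):

```lua
-- math.huge must behave as +infinity, and must not be NaN.
assert(math.huge > 0 and math.huge == math.huge)  -- +inf, not NaN
assert(math.huge + 1 == math.huge)                -- saturates at infinity

-- NaN is the one value that is not equal to itself.
local nan = 0 / 0
assert(nan ~= nan)
```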
-
#adventOfCode day 3 in #LuaLang, update: Execution time on my real #ti92 was 17m12s.
The TI-92+ appears to run Lua programs at 1/10,000 the speed of my Raspberry Pi 2B: that is, take the RPi execution time, shift to the next SI size category (like milliseconds to seconds) then multiply by ten, and you'll be in the ballpark.
Meanwhile, the Raspberry Pi is about 1/50 the speed of my desktop.
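That rule of thumb, as a hedged helper function (my own fit to a handful of timings, so treat the constant as approximate):

```lua
-- Estimate TI-92+ runtime from a Raspberry Pi 2B runtime.
-- "Shift one SI category and multiply by ten" is just x10,000.
local function ti92_estimate(rpi_seconds)
  return rpi_seconds * 10000
end

-- Day 3 took 0.156 s on the Pi; predicted calculator minutes:
print(ti92_estimate(0.156) / 60)  -- roughly 26 minutes
```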
-
#adventOfCode day 3 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/03.lua
- PC - 5.4 ms
- Raspberry Pi 2: 156 ms
- #ti92 Plus: ??? minutes (still running)
The program runs successfully in TiEmu with the emulation speed multiplier unlocked. It has yet to be seen how long it'll take on the real calc...
-
Adapted #adventOfCode 2025 day 2 in #LuaLang to use a much more efficient algorithm to fit on #ti92 Plus.
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/02fast.lua
- Desktop: 1.1 ms
- Raspberry Pi 2B: 20 ms
This also found a bug in my port of Lua:
math.huge was the wrong value, so a particular loop was never running.
-
Because the post upthread has poor video quality, here's a higher res photo of the 2025 day 1 result on TI-92+.
#ticalc #ti92 #adventOfCode #LuaLang #retrocomputing
-
Adapted #adventOfCode 2025 day 1 in #LuaLang to use less memory to fit on #ti92 Plus. Execution time: 2m 42s.
The TI-92+ has a 12 MHz 68000 and 188 KB RAM that is also used as the calculator's main ramdisk, meaning the Lua interpreter, script, puzzle input, and all data structures must fit in that space. The calculator runs on 4 AA batteries with a battery life of "well beyond a school year".
-
#adventOfCode day 1 in #LuaLang
https://gitlab.cs.washington.edu/fidelp/advent-of-code-2025/-/blob/main/01.lua
- PC: 1 ms
- Raspberry Pi 2: 70 ms
- #hp200lx: Error after 2 minutes. I wonder if the garbage collector settings need to be tuned to avoid memory exhaustion.
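If GC tuning does turn out to be the fix, these are the standard knobs from the Lua reference manual; the values here are guesses, not tested settings:

```lua
-- Make the collector start sooner and work harder per step.
-- Both calls return the previous setting.
collectgarbage("setpause", 100)    -- begin a new cycle as soon as one ends
collectgarbage("setstepmul", 400)  -- collect more aggressively per step

print(collectgarbage("count"))     -- current heap use, in KB
```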
-
My #adventOfCode solutions this year will be in #LuaLang and will be ranked by the smallest machine they run on:
-
Putting Lua through its paces. Here's Advent of Code 2024, Day 13, running on all my favorite hardware that I had on hand, through the power of Lua.
Ryzen 5 9600X modern desktop - 1 ms.
HP 200LX, 8 MHz 186, 640 KB RAM available to Lua: 62 seconds.
TI-92+, 12 MHz 68000, 64 KB RAM available to Lua: 65 seconds.
"Write once, run anywhere".
-
LuaRocks 3.12.0 is now released, the first release after the codebase was converted to Teal!
Congrats to my mentee Victor Ilchev, who performed the conversion as his summer project for GSoC 2024. His work should be hitting production CIs worldwide by now!
-
Thinking about adding Lua scripting.
How does one decide between Lua 5.4, 5.3, 5.2, 5.1 (yay, LuaJIT) or Luau?
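Whichever you pick, a runtime probe helps keep embedding code honest about which Lua it actually got (a small heuristic of mine; math.type only exists in 5.3+):

```lua
-- _VERSION reports the interpreter, e.g. "Lua 5.4".
print(_VERSION)

-- Lua 5.3+ has true integers, detectable via math.type.
if math.type then
  print("5.3+: has integers;", math.type(1))
else
  print("5.1/5.2/LuaJIT: numbers are all floats")
end
```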
-
🎵 Lua developers, can you code "Bohemian Rhapsody"? 🌙🖥️ Join the challenge and get your lyrical code on a t-shirt!
👉 Full Details here: https://blog.carolina.codes/p/easy-come-easy-go-will-you-let-mecode
#CarolinaCodes #LyricalCodeChallenge #LuaLang
-
Lua is an interesting minimalist programming language. It is only 150 KB, yet it can do a lot. And it's fast (for a scripting / interpreted language).
It just works somewhat illogically for my taste :-D It probably needs a longer acquaintance. Screenshot from an Android phone (Termux)
#LuaLang #programovani -
Projects on my plate (in no particular order; or maybe it is in priority-order):
1. My personal #Hugo / #GoHugo boilerplate (with #a11y (accessibility), #microformats, #fediverse, #IndieWeb, support)
2. #Filipino language in #Hangeul. (Temporarily calling it #FilipinoHangeul.)
So far, I've mapped the IPA phonemic between Korean #Hangul and the Filipino language.
Inspired by:
a. #CiaCial Hangeul (actually in use)
b. #TaiwaneseHangul
c. #FilipinoHanzi (Filipino language in Hanzi [Chinese script])
d. Taiwanese Kana
3. #AnsalonMUD #MUDlet client.
I'm porting our #Lua / #LuaLang scripts from #MUSHclient to MUDlet, as well as creating a new UI and other MUDlet widgets.
I like the current version of MUDlet; it has come far since I last tried it, and for me is now better than MUSHclient. Not only that, MUDlet is cross-platform while MUSHclient is Windows only. Since I'm using #Linux, a native client is much preferred over using #WINE.
4. An update to the #Philippines Unicode Keyboard Layout.
Was put on hold indefinitely. There is a plan to submit a bill to the Senate and Lower House to standardise keyboards and keyboard layouts for the Philippines.
Whatever becomes the “law”, will be the next update for PUKL.
Layouts planned:
* A true #Baybayin layout.
* QWERTY (with Baybayin)
* #Colemak (with Baybayin)
* #Dvorak (with Baybayin)
Standardising this will ensure that the default keyboard layout for Windows, Mac, Linux, Android, and iOS will be the one we designed for Philippine / Filipino use.
In addition to that, physical keyboards will have the same layout, instead of keys flying here and there. If we need an extra key, then we'll include an extra key (like in the Japanese and Korean keyboards).
For this project, it's going to take a long time because my country is terrible when it comes to standardisation. Imagine this, only government agencies are required to use the SI/Metric system. Everyone else can use whatever they want, SI, Metric, Imperial, Traditional, or alien. (This is another project I'm thinking of taking on much later.)
-
Instead of doing what I should be doing, I'm obviously procrastinating by hunting for an alternative to #Peek, the GIF screen recorder I used to prepare support material for my classes. I found #Gifine, which appears to be written in #Lualang, which forced me to install #luarocks, and since I'm clumsy with pamac, it is at this very moment updating 2GB of stuff on my Manjaro.
-
Announcing LuaRocks 3.0.0beta1!