home.social

#schemelang — Public Fediverse posts

Live and recent posts from across the Fediverse tagged #schemelang, aggregated by home.social.

  1. In going through some old papers, I ran across these very interesting documents from long ago that I can't seem to find public reference to. They seem to offer some important historical insight about the Dylan language. This is from back when Dylan was called Ralph as a working title. In those days, the still-being-designed Lisp-like language had not yet moved to an infix syntax, and it looked and acted more like Scheme with an object system similar in spirit to CLOS (the Common Lisp Object System).

    My understanding is that there were some fairly deliberate choices made to NOT target the Lisp or Scheme community as users, which is part of why the move to infix. I think they wanted to appeal to a disaffected C++ crowd, but ultimately lost out to Java for that bid, and then having left the Lisp user base behind, ended up with a very small community as a result.

    But I still think there could be things the Scheme community would want to glean from this snapshot of history.

    I've included a scan of an email proposal I got from Dave Moon while he and I were at Symbolics, proposing how to add conditions to the language. Note that Dylan did eventually go public and did have a condition system, so you could also just study that design directly. What's useful here, though, is seeing how all of that looked in a Scheme-like syntax. In that regard, I recommend starting by looking at the language itself.

    [0] Ralph: A Dynamic Language with Efficient Application Delivery, by Andrew LM Shalit, July 25, 1991.
    nhplace.com/kent/History/dylan

    [1] Ralph Conditions (part 1 of 2)
    nhplace.com/kent/History/dylan

    [2] Ralph Conditions (part 2 of 2)
    nhplace.com/kent/History/dylan

    cc @sigue @ramin_hal9001 @screwlisp

    #DylanLang #RalphLang #ComputerHistory #Harlequin #Lisp #CommonLisp #ConditionSystem #ConditionHandling #ErrorSystem #Scheme #SchemeLang #CLOS #AppleHistory #KentsHistoryProject

  6. An idea to defeat #GenerativeAI in #FreeSoftware:

    Just use a #ProgrammingLanguage that isn’t popular (e.g. #Haskell or some #Lisp dialect) to write your code, but publish a human-readable intermediate form of that code (e.g. in the C programming language) in the public code repositories. Share the actual source code privately with trusted contributors in non-public branches, and require GPG signatures on actual contributions.

    You could argue that not sharing source code is against the GPL, but the GPL does allow you to share the code as a hard copy printed on paper and sent over snail mail. Or you can just wait until the person asking is an actual human that you can trust not to use the source code for LLM training.

    LLMs are unable to learn unpopular programming languages because they don’t have a sufficient corpus of training data to learn how to write them, so if you receive a contribution in C, thank the contributor but inform them that they will have to rewrite the contribution in your Lisp dialect before you can accept it.

    #Scheme dialects like #Gambit , #Chicken , and #Bigloo would work well for this. So would a #CommonLisp implementation that translates to C such as #ECL . Keep in mind that the idea is to use a less popular language, so you may have to further obscure these languages a little bit, but not in a way that would be difficult for humans. For example, using a macro system, you could use df instead of define, rename type predicates like string? to utf8str?, or use generic functions with multiple dispatch so that append works on strings, lists, vectors, and bytevectors. Small tweaks like this might throw off an LLM asked to write source code in Lisp.
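    To make that concrete, here is a hypothetical sketch in R7RS Scheme; the names df and utf8str? are inventions from the idea above, not from any existing library:

    ```scheme
    ;; "Obfuscation" macro: df behaves exactly like function-defining
    ;; define, but under a name an LLM is unlikely to have seen.
    (define-syntax df
      (syntax-rules ()
        ((_ (name . args) body ...)
         (define (name . args) body ...))))

    ;; Renamed type predicate, as suggested above.
    (define (utf8str? x) (string? x))

    (df (greet who)
      (string-append "hello, " who))

    (display (greet "world")) (newline)            ; prints hello, world
    (display (utf8str? (greet "world"))) (newline) ; prints #t
    ```

    A human contributor only needs a one-line comment to learn the mapping; a language model prompted to “write Scheme” will keep reaching for define.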

    #tech #software #LLMs #LLM #FOSS #FLOSS #OpenSource #SchemeLang #R7RS

    I was able to finish reading all of “The Genius of Lisp” by @cdegroot, and the whole book was as good as the free preview (chapter 8). I was able to speed-read through the detailed explanations of concepts I already knew, like tail recursion, garbage collection, the Y-combinator, currying, and so on. But there were parts where I slowed down and read carefully, like the section on the Universal Turing Machine and some of the details of the IBM-704 system architecture. The story of how the first Lisp implementation was created, when one of McCarthy’s grad students implemented an M-expression calculator, was also described in slightly more detail than what I recall McCarthy himself explaining in his 1960 paper — that, or I had just forgotten those parts of the story.

    The tone of this book reminds me a lot of popular physics books like Stephen Hawking’s “A Brief History of Time,” which was aimed more at general audiences than professionals. That said, there is a lot to enjoy in this book for professionals like myself as well. There are many good stories about the principal designers of Lisp throughout. The sections on the commercialization of Lisp during the first AI boom and its subsequent “AI winter” were very interesting to read. And if you are a teacher, you might like how some of the concepts in the book are explained.

    And I would definitely recommend this book very strongly to third-year high school students, or first- and second-year college students, who are genuinely curious about how computers work and want to know more than just how to make the next billion-dollar app.

    On the next #LispyGopherClimate show with @screwlisp, I look forward to talking about this book some more.

    #tech #software #Lisp #ProgrammingLanguages #SchemeLang #Scheme #Clojure #Emacs #EmacsLisp #RetroComputing #LispyGopherClimateShow

  8. @badrihippo modern frameworks like React, Vue, and Van.js are all very similar, but I have not seen a consistent name for this family of frameworks. I have heard it called “The Elm Architecture,” because they are loosely based on how the Elm programming language originally did GUI programming in the browser. I have also heard it called the Model-View-Update paradigm. But most people just call it “React-like” or “Reactive Programming” because they are all similar to the very popular “React.js” framework.
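    These frameworks all share a Model-View-Update shape that can be sketched in a few lines. Here is a toy illustration in R7RS Scheme (my own sketch, not any particular framework’s API):

    ```scheme
    ;; Toy Model-View-Update loop: the model is a number, messages are
    ;; symbols, update is a pure function, and view just prints the model.
    (define (update model msg)
      (case msg
        ((increment) (+ model 1))
        ((decrement) (- model 1))
        (else model)))

    (define (view model)
      (display "count: ") (display model) (newline))

    ;; The event loop folds a stream of messages over update, re-rendering
    ;; after each step; real frameworks diff the rendered view instead of
    ;; redrawing it wholesale.
    (define (run model msgs)
      (view model)
      (if (null? msgs)
          model
          (run (update model (car msgs)) (cdr msgs))))

    (run 0 '(increment increment decrement)) ; final model is 1
    ```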

    Note that this should not be confused with Functional Reactive Programming (FRP), although the two are not unrelated. As I understand it, React-like GUIs and FRP can both be implemented on top of a more powerful and more general computation model called “propagators” (here is the PDF of the original Propagators paper).

    @dthompson wrote a really good blog post about FRP, propagators, and React-like frameworks.

    I hope that helps, but I am not as well-versed in the theory of this stuff as I should be.

    Oh, and I should say, before React-like frameworks took over the World Wide Web, GUI programming was mostly intertwined with object-oriented programming and design, so a good place to start might be to read up on Smalltalk OOP and GUI design.

    #tech #software #GUI #ReactiveProgramming #FRP #Scheme #Haskell #SchemeLang #Propagators #ElmArchitecture #ReactJS #Smalltalk #OOP #ObjectOriented

  12. New book: “The Genius of Lisp” by Cees de Groot

    Looks like a fascinating read! They have provided chapter 8 as a PDF file, downloadable gratis as a sneak peek into the rest of the book. Guess what it’s about:

    Chapter 8: Sussman and Steele make Scheme

    Awesome! I can’t wait to read that chapter, and then the rest of the book!

    Details on the book homepage: https://berksoft.ca/gol/

    #tech #software #Lisp #Scheme #SchemeLang #R7RS

    RE: https://mstdn.ca/@cdegroot/116086771614712320

  13. #Schemacs update

    I decided to merge my #Scheme React-like declarative GUI framework, (schemacs ui), even though the back-end isn’t completely bug-free yet. (It is still an experimental software project, so the main branch is the development branch.)

    Though it is written in pure #R7RS “small” Scheme, the only GUI back-end currently available is for Guile users who go to the trouble of installing Guile-GI on their own.

    If Guile-GI is installed, put on your safety goggles and run this command in the Guile REPL:

    (load "./main-gui.scm")

    I haven’t tried getting it to work in a Guix shell for almost a year now, but my last attempt did not go well (it crashed while initializing Gtk). To anyone who wants to try the GUI, I am sorry to inconvenience you, but I’m afraid I just have to ask you to please install Guile-GI yourself using the old-fashioned ./configure && make && make install method. If anyone happens to be able to get it to work in a Guix shell, please let me know, or open a PR on the Codeberg Git repository.

    The only examples of how to use (schemacs ui) are in the test suite and the Schemacs debugger. The only documentation so far is the comments in the source code, though I did try to be very thorough with them.

    The “Debugui” debugger works, but has only a single feature: the eval-expression command, which is bound to M-: (Alt-Colon). This command works the same as in #Emacs, except you enter a Scheme expression instead. The #EmacsLisp interpreter is not yet connected to the GUI.

    Now that this is merged, I am going to work on a few tasks in the Emacs Lisp interpreter that have been pending for more than a few weeks now. Then, back to creating new features in the GUI toward the goal of making it a useful program editor. And also, of course, writing some more documentation.

    #tech #software #R7RS #SchemeLang

  14. #Schemacs update

    I have been banging my head against #Gtk3 for the past 3 weeks, and all progress has pretty much come to a standstill. No matter how simple and straightforward my GUI is, Gtk makes it simply impossible to get the layout correct. I am now convinced that programming my own layout algorithm from scratch, using the GtkLayout container (which lets you place widgets at arbitrary X,Y coordinates), is the only way to proceed at this point. It is soooo frustrating.

    The #Gtk documentation is good, but not at all good enough. The people on the Gnome Discourse have been very kind and helpful, and I truly appreciate the engagement I have had there, but ultimately I am still not able to solve my problems.

    I have decided I need to find some way to keep making progress without postponing the release of the work I have done so far for an indeterminate length of time. So rather than work out all the bugs in this version before merging it to the main Git branch, I will instead have the main program launch a debugger window. The debugger window will have all layout calculated in advance, and all widgets will be declared once and only once throughout the lifetime of the application to avoid the reference counting issues. Obviously the debugger GUI will be very rigid, but you will at least be able to edit files and run commands in a REPL within this debugger.

    Then maybe I can merge the code I have written to the main Git branch, and people will at least be able to use it through the debugger. Maybe also I could use this debugger to help with writing my layout algorithm. Also, I need to get back to the Emacs Lisp interpreter, I haven’t worked on it in almost two months now.

    #tech #software #Lisp #Emacs #EmacsLisp #Scheme #SchemeLang #R7RS

  15. @screwlisp @kentpitman I’m just reading up on the MIT-Scheme condition system. Recent efforts to standardize this are defined in SRFI-255: “Restarting conditions”.

    An older standard condition system for Scheme was defined in SRFI-35: “Conditions”, and #Guile users can make use of it through Guile’s implementation of SRFI-35.

    I wish I had known about this two weeks ago when we first started talking about it on the #LispyGopherClimate show, but better late than never, I guess.

    #tech #software #Lisp #CommonLisp #Scheme #SchemeLang #R7RS #MITScheme #Guile #GuileScheme

    @screwlisp @kentpitman regarding the discussion we had after the #LispyGopherClimate show ended, MiniKanren is a logic programming language embedded in Scheme (sort of like a Prolog implemented in Scheme and coded with S-expressions), and you can use machine learning methods like neural networks to guide the search tree of the goal-solver mechanism. This paper is an example of what I was talking about.

    Even before LLMs were invented, MiniKanren was able to do program synthesis using purely symbolic logic. They developed a prototype called Barliman where you would provide example input->output pairs as constraints, and a constraint solver could generalize those examples into a function that produces the correct output for any input. As a simple example, you could give it the following input-output pairs:

    1. () -> ()
    2. (a) () -> (a)
    3. () (a) -> (a)
    4. (a) (a) -> (a a)

    …and the constraint solver could determine that you are trying to implement the append function for lists and write the code automatically — without LLMs, using purely symbolic logic.
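    For reference, the function being synthesized in that example is just the ordinary two-list append, which a Scheme programmer would write as:

    ```scheme
    ;; Two-list append, consistent with the input->output pairs above.
    (define (append2 xs ys)
      (if (null? xs)
          ys
          (cons (car xs) (append2 (cdr xs) ys))))

    (append2 '(a) '(a)) ; => (a a)
    ```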

    As you might expect, the solver could be very slow, or even diverge (never returning an answer). The paper I mentioned above talks about using neural networks to try to guide the constraint solver to improve the performance and usefulness of the results returned by the solver.

    Now imagine applying this technique to other domains besides code generation or optimization, for example, auto-completion, or cache pre-fetching, and building it into a programmable computing environment like Emacs. You could have a tool like “Cursor,” but instead of using LLMs, it uses classical computing and constraint solvers, while taking a fraction of the amount of energy that LLMs use.

    #tech #software #AI #LLM #MachineLearning #NeuralNetwork #ConstraintLogic #ConstraintSolver #LogicProgramming #Prolog #MiniKanren #Emacs #Lisp #Scheme #SchemeLang #ProgramSynthesis

  20. Ouch, #Guile #Scheme has betrayed me

    I am using Guile-GI the GObject Introspection framework for Guile, and discovered that the eq? predicate sometimes returns #t for two different symbols. Does #GOOPS allow overloading eq? on symbols such that it can return #t on different symbols? If so this seems like a huge problem to me, it completely violates the Scheme language specification. (Or should I ask, is this a “GOOPS oopsie?”)

    Anyway, what happens is this: you can capture a Gtk keyboard event in an event handler, and extract the list of modifier keys pressed on that key event. It looks something like this:

    (lambda (event)
      (let*-values (((state-ok modifiers) (event:get-state event))
                   ((mod-bits)           (modifier-type->number modifiers))
                   ((first-mod)           (car mod-bits)))
        (display "first modifier: ") (write first-mod) (newline)
        (display "is symbol? ") (write (symbol? first-mod)) (newline)
        (display "eq? to 'mod1-mask: ") (write (eq? 'mod1-mask first-mod)) (newline)
        #t
        ))

    And the output of the above event handler, when I press a key with a CJK input method enabled (on latest Linux Mint) is this:

    first modifier: modifier-reserved-25-mask
    is symbol? #t
    eq? to 'mod1-mask: #t

    The fact that (eq? 'mod1-mask 'modifier-reserved-25-mask) returns #t when the 'modifier-reserved-25-mask symbol has been obtained from a C-language FFI callback is a pretty bad thing to happen in a Scheme implementation, in my humble opinion.

    #tech #software #Schemacs #SchemeLang #R7RS

  21. #Schemacs Update

    I have partially resolved the issue I mentioned in my #EmacsConf2025 presentation, regarding the closure of a lambda not correctly capturing its environment.

    I say “partially” resolved because although my current solution results in correct program behavior, it does not consider let-bound variables inside of the closure. So variables declared locally inside the closure using the let keyword will mask variables of the same name in the closure environment. A correct implementation would simply not include those masked variables in the closure environment at all. This can sometimes impact garbage collection: a closure may hold a variable that retains a large amount of memory, even though that variable is not accessible anywhere because it is masked by the let-bound variables in the closure.
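
    To illustrate with a contrived sketch (my own example, not actual Schemacs code):

    (define big-table (make-vector 1000000 0)) ; large object in the outer scope

    (define closure
      (let ((big-table '()))    ; let-binding masks the outer big-table
        (lambda () big-table))) ; the lambda can only ever see the local binding

    ;; A naive closure capture might still retain the outer big-table in the
    ;; closure environment, even though it is unreachable from the lambda,
    ;; preventing the garbage collector from reclaiming the vector.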

    However, I am eager to keep things moving, so I am merging this PR and opening a new issue to resolve the let-bindings problem later. To find out more, see issue #62 on Codeberg.

    #tech #software #Emacs #EmacsConf #Scheme #R7RS #SchemeLang

  22. I decided it was time to add a Code of Conduct to #Schemacs

    https://codeberg.org/ramin_hal9001/schemacs/src/branch/main/CONTRIBUTING.md

    I have been getting a lot of inquiries about Schemacs lately, and so I think it is best to settle on a Code of Conduct (CoC) right now before I find myself in the awkward position of accepting a lot of patches from various people who later on turn out to hate each other. Hopefully we won't ever find ourselves in such a situation, but it is best to be prepared.

    I decided to go with the Contributor Covenant 3.0, which provides a template for a CoC that adapts well to various projects. If I recall correctly, I believe I learned about it from Christine Lemmer-Webber @cwebber on one of her podcasts.

    I am open to recommendations for changes to the CoC as well. I understand that my knowledge of such things is imperfect, and I am willing to learn about various opinions regarding codes of conduct if anyone is willing to teach me more.

    And of course, #Schemacs is free software, so anyone is welcome to fork it and develop it separately if you disagree with the CoC. But as long as I am the project manager for #Schemacs I will make the executive decision about the CoC.

    #tech #software #CodeOfConduct #Scheme #SchemeLang #R7RS #Lisp #Emacs #FLOSS #FOSS

  23. My EmacsConf presentation on Schemacs

    — will be live in about 5 minutes.

    https://emacsconf.org/2025/watch/dev

    Questions can be posted to this live chat: https://pad.emacsconf.org/2025-schemacs

    EDIT: Thanks for all the great questions everyone. It is so encouraging for me to see that there is so much interest in this project, it really keeps me motivated to keep working on it.

    #tech #software #Emacs #SoftwareDevelopment #EmacsConf #EmacsConf2025 #SchemeLang #R7RS #Scheme

  24. @dpk (chair of the R7RS Scheme programming language standard working group) had designed a new, extensible #R7RS #Scheme pattern matcher library which generates optimal decision trees, and she has written a fantastic blog post about how it works: https://crumbles.blog/posts/2025-11-28-extensible-match-decision-tree.html

    #tech #software #ProgrammingLanguage #SchemeLang

  25. The official steering committee of the Scheme programming language is calling a vote to replace themselves

    Quoting the memo:

    The outgoing Steering Committee was elected in 2009 and successfully oversaw the production and ratification of the R7RS small language report until 2013. Unfortunately, during the protracted initial development of the R7RS large language after that, it fell dormant.

    The current Scheme Working Group resolved in September 2025 to ask the Steering Commitee for a new election because it felt that after such long dormancy the outgoing Steering Committee was no longer able, as a group, to make and implement decisions effectively.

    The Scheme standardization process charter says, ‘The Steering Committee itself shall establish procedures for replacing its members.’ The outgoing Steering Committee unanimously decided to delegate this task to the current Working Group. The Working Group has very closely modelled the procedure to be used this time on the procedure used last time.

    The Working Group has written a statement to candidates and voters explaining what it hopes for in a new steering committee.

    Lobste.rs thread

    #tech #software #Scheme #SchemeLang #ProgrammingLanguage #R7RS #R7RSLarge #Lisp #FunctionalProgramming #Guile #GuileScheme #ChezScheme #ChickenScheme #GambitScheme #RacketLang #Racket

  26. @plantarum hey, I’m the author of Schemacs.

    Yes, there are Emacs-like editors written in a whole other language which make no attempt to clone Emacs Lisp.

    • Lem is a text editor written in Common Lisp, but it relies on SBCL-specific features so you can only build it on SBCL. The nice thing about Lem is that you have access to the entire SBCL ecosystem, which is pretty close to Python in the number of useful packages you can use with it. It uses SDL2 to display. It at one point had an Electron front-end but I think they abandoned that.

    • Edwin is a text editor written in Scheme, and comes bundled with the MIT Scheme implementation, which is compliant with the R7RS-Small Scheme standard. I believe it includes some of the original code used to teach the Scheme course at MIT back in the late 80s, and it is still minimally maintained even today. When I say “minimal,” I mean there have really been almost no new features added to it in like 30 years. It clones Emacs version 18 which was released back in 1992. All the maintainers do is make sure it runs on modern computers, and they otherwise leave it alone.

    • Lite is a text editor implemented and scriptable in Lua on top of a minimal C-language kernel. This makes it more like Emacs, which is Lisp running on top of a small C-language kernel. I think you can even use libluagit5 to JIT-compile your Lua packages, which probably makes it extremely fast.

    That said, I don’t find any of these especially useful because they lack the huge package ecosystem that exists for Emacs. Emacs “apps” that I use all the time include Magit (Git porcelain), Hyperbole (a cross-referencing app), TRAMP (remote access), Org-Mode, Mastodon-Mode, ERC chat, Elfeed (RSS), as well as the built-in Dired, Proced, and Shell modes, plus all the integrations Emacs has for shell utilities like Find, Grep, GPG, SSH, Tar, Zip, and so on. Without these, I would not have nearly as easy a time getting my work done.

    That is why I have put so much effort into cloning Emacs Lisp. I want Emacs users to be able to use the Emacs code they have already written for themselves, and rely on, while being able to transition over to an editor with a better scripting language.

    @llewelly

    #tech #software #Emacs #ProgrammingEditor #CommonLisp #SchemeLang #R7RS #EmacsLisp #Lisp #ComputerProgramming

  27. “Why rewriting Emacs is hard,” by @kana

    Yes it is, I can tell you from experience. Of course, I was never under any illusion that it would be easy.

    @kana, a.k.a. “Gudzpoz,” wrote a blog post which was shared on Lobste.rs, and they kindly mention my own Emacs clone Schemacs, though they refer to it by its old name, “Gypsum,” because they are citing my EmacsConf 2024 presentation, given before the name changed.

    It is a pretty good post going over some of the odd details about how Emacs edits text, e.g. the character range is from 0x0 to 0x3FFFFF rather than the Unicode standard range from 0x0 to 0x10FFFF, issues with using a gap buffer as opposed to a “rope” data structure, attaching metadata (text properties) to strings to render different colors and faces, and issues with Emacs’s own unique flavor of regular expressions in which the \= symbol indicates matching on the point in the buffer. (I did not know about that last one!)

    Apparently, they know these things because they are also working on their own clone of Emacs in Java for the JVM called Juicemacs (the name “Juice” upholding the theme of Java-based applications being named after drinks), and I deduce that their approach is to read through the Emacs C source code to ensure better compatibility. This is now the fourth modern Emacs+EmacsLisp clone that is still under active development that I know of, fascinating work!

    My approach is to clone Emacs well enough to get it to pass regression tests, and I don’t read the C source code, I do black-box testing (because those tests become regression tests for my own source code).

    Also, the goal with the Schemacs project is more to provide a Scheme-based Emacs that is backward-compatible with GNU Emacs. You use Schemacs because you want to program it in Scheme, not Emacs Lisp, but Emacs Lisp is there for you so you can still use your Emacs config. As a result, I will ignore a lot of these fussy details of the GNU Emacs implementation unless it is going to prevent regression tests from passing.

    #tech #software #Emacs #GNUEmacs #Schemacs #EmacsLisp #Lisp #Java #Scheme #R7RS #SchemeLang #LispLang #JavaLang

  28. #Schemacs minor milestone reached

    With pull request #50 the Schemacs Elisp interpreter is now able to load all of two very important Emacs Lisp source files:

    …which are two files that define most of what you could call the Emacs Lisp “core” language (by which I mean macros like defun and lambda).

    With these files now loaded, I can proceed to the next task, which is implementing enough of the C-level built-in functions in Scheme to be able to run ./lisp/emacs-lisp/cl-lib.el, which is in turn one of the dependencies for running the Emacs Regression Tests (ERT) suite.

    Once ERT is up and running, it will be much easier for anyone to contribute code to this project as you will just be able to pick a failing regression test and write whatever code is necessary to make it pass.

    #tech #software #Emacs #EmacsLisp #Lisp #Scheme #SchemeLang #R7RS #FOSS #FreeSoftware

  29. My submission to ICFP/SPLASH 2025 was rejected ☹️ . Although if I am honest, the reviewer’s reasons for rejecting it make perfect sense; I can’t disagree with their decision.

    The work I am doing on Schemacs really isn’t novel in any way at all; it is just a run-of-the-mill engineering project, and everything I do has been done before. I mean, there is no need to invent some new technique to solve an already-solved problem. That is not really the kind of thing that makes for a good conference paper. The biggest problem, of course, is that the application isn’t complete yet, so there is not much to share.

    Well, my readers here on Mastodon can expect a series of blog posts pretty soon as I re-format my paper for publishing on my blog.

    #tech #software #scheme #r7rs #SchemeLang #ICFP #icfpsplash2025 #splash2025

  30. Thinking of publishing a paper about #Schemacs at ICFP/SPLASH 2025

    …except there is not much in the way of original research. But I have received a lot of positive feedback about my project from the Scheme and Emacs community. So let me ask the Scheme/Emacs fediverse: if you would be interested in using or contributing to a Scheme-based Emacs that is mostly backward-compatible with #GNUEmacs , what is it about this prospect that is most interesting to you?

    Personally, I live inside of Emacs and program most of my personal workflows in Emacs Lisp, though I feel that Scheme is a more interesting and fun language to use when compared to other #Lisp-family languages. So I would just like to be able to use Scheme as the language in which I program all of my personal workflows. Also I am curious if it is possible to write a large application in #R7RS Scheme such that it runs on many different Scheme implementations.

    So does anyone else agree, or are there other things about a prospective Scheme-based Emacs that interest you that might be worth mentioning to the audience of the Scheme-related chapters of the ICFP?

    I was talking with William Byrd, who is one of the conference organizers of ICFP/SPLASH this year, and he says the committee could possibly accept anything of interest to the Scheme community, for example experience reports and “position papers” (helping others understand an opinion or philosophy on the topic). And they would judge these papers on different criteria than a paper about novel scientific research.

    Anyone feel free to comment, but I am going to ping a few people in particular who seem to have opinions on this, like @dougmerritt @jameshowell @david_megginson @tusharhero @arialdo @lispwitch @cwebber @dpk and also @PaniczGodek who published on GRASP at this conference last year, if I recall correctly.

    #tech #software #FOSS #FLOSS #SchemeLang #ProgrammingLanguage

  31. [SOLVED] Question about how to use Akku packages with Chez Scheme

    I can set up the project to build using Akku-R7RS:

    akku add akku-r7rs;
    akku install;
    ./.akku/env;

    But then how should I build each of the .sld files to binary using the Chez compiler?

    @mdhughes @civodul @wasamasa do any of you know how to do this?

    #tech #software #Scheme #SchemeLang #R7RS #ChezScheme #Akku #AkkuScm #AkkuScheme #AkkuR7RS #Lisp #ComputerProgramming #LispQuestions #LispAskFedi

  32. Progress on my clone of the Emacs Lisp interpreter

    This took me three months (a month longer than I had hoped), but I finally have merged it into the main branch!

    This patch rewrites the Emacs Lisp lexer and parser in Scheme using Scheme code that is 100% compliant with the #R7RS standard, so it should now work across all compliant Scheme implementations. Previously the old parser relied on #Guile -specific regular expressions.

    This patch also implements a new feature where a stack trace is printed when an error occurs. This of course makes debugging much, much easier. Previously the old parser did not keep track of where code evaluation was happening, it simply produced lists without source location information. The new parser constructs an abstract syntax tree (AST) and source locations are attached to the branches of the tree which can be used in error reporting and stack traces.
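
    As a rough sketch of the idea (the record and field names here are my own invention, not the actual Schemacs API), an AST node carrying a source location might look like this in portable R7RS Scheme:

    (define-record-type <ast-node>
      (make-ast-node datum file line column)
      ast-node?
      (datum  ast-node-datum)   ; the parsed expression itself
      (file   ast-node-file)    ; source file name
      (line   ast-node-line)    ; line where the expression begins
      (column ast-node-column)) ; column where the expression begins

    With records like this in the tree, the evaluator can report the file, line, and column of the expression being evaluated whenever it constructs a stack trace.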

    Next I will make whatever minor tweaks might be necessary to get my Emacs Lisp interpreter to run on other Scheme implementations, in particular MIT Scheme, Gambit, Stklos, and Gauche. I would also like to try to get it running on Chicken and Chez, although these are going to be a bit more tricky.

    Then I will continue with the task of implementing a new declarative GUI library.

    #tech #software #FOSS #FunctionalProgramming #Lisp #Scheme #SchemeLang #EmacsLisp #Emacs #Schemacs #GuileScheme

  33. The #LispyGopherClimate #weekly #tech #podcast for 2025-04-02

    Listen at: https://archives.anonradio.net/202504020000_screwtape.mp3

    This week we will talk about the Unix Philosophy and how it compares and contrasts with whatever one might call the “Emacs Philosophy.”

    The impetus for the discussion is a series of blog posts by @ramin_hal9001 called “Emacs fulfills the UNIX Philosophy”:

    …as well as a fascinating discussion that took place over this past week on ActivityPub on the topic of the Unix philosophy and history of Lisp on Unix in which some very knowledgeable people have contributed anecdotes and facts.

    #technology #programming #SoftwareEngineering #RetroComputing #lisp #r7rs #SchemeLang #UnixPhilosophy

    This week’s #ClimateCrisis #haiku by @kentpitman
    within each of us
    our loved ones, in tiny form,
    caring's innate yield
        company at a distance
        legacy in case of loss

    #senryu #poem #ShortPoem #SmallPoem #SmallPoems

  34. What I don’t like:

    • some stuff breaks “everything is a list” model
    • Common Lisp is not minimal, includes overlapping and legacy stuff

    does #scheme address this?

    @rzeta0 I would say yes, Scheme sort of addresses those issues.

    Scheme’s biggest advantage is that it is minimal enough that you can understand the whole language specification top-to-bottom, inside and out. But that is also its greatest drawback: it is too minimal to be practical. So for a long time, every single Scheme implementation had its own large and unique set of libraries for solving practical programming problems, incompatible with every other Scheme implementation, making the Scheme ecosystem very fragmented. The Scheme Request for Implementation (SRFI) process is meant to address this fragmentation issue. Fragmentation is still (in my opinion) a pretty big problem, though things are much better than they were 20 years ago.

    The R6RS standard, as I understand it, tried to make Scheme more practical, but it started to become too Common Lisp-like in complexity so it was mostly rejected by the Scheme community — with a few notable exceptions, like the Chez Scheme compiler.

    The next standard, R7RS, split the language into two parts. “R7RS small,” ratified in 2013, is much like the original minimal core of the Scheme language plus a few new features, in particular the define-library form, for modularizing parts of Scheme programs into immutable environment objects. Then they took a collection of SRFIs and declared them to be part of the “R7RS large” language standard. The full “large” language specification is not yet fully ratified, even more than a decade after the completion of R7RS “small,” but I think the SRFIs they have ratified so far already make the latest Scheme standard a very practical language. The final R7RS standard may end up being larger than Common Lisp, but that is fine with me, since it can be almost completely implemented in the R7RS “small” Scheme standard.
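
    For anyone who has not seen define-library, here is a minimal sketch (the library and procedure names are made up for illustration):

    (define-library (example greet)
      (import (scheme base) (scheme write))
      (export greet)               ; only greet is visible to importers
      (begin
        (define (greet name)
          (display "hello, ")
          (display name)
          (newline))))

    Everything not listed in the export clause stays private to the library, which is what gives you those sealed, immutable module environments.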

    R7RS “small” Scheme, in my opinion, is a powerful but minimal language that exists to implement other languages, but is still useful in its own right as a progeny of Lisp. The “R7RS large” language then adds the useful features of larger languages like Python or Common Lisp as a layer on top of the “R7RS small” language.

    The current chair of the R7RS working group is Daphne Preston-Kendal, who is often on Mastodon as @dpk . She can tell you if I got anything in this post wrong.

    #tech #software #SchemeLang #R7RS #ProgrammingLanguage

  35. @xameer the “R7RS small” Scheme standard has a full numerical tower built-in, including unbounded integers.

    (- (+ (expt 10 100) 1) (expt 10 100))

    gives you precisely the correct answer (exactly 1) without any floating-point operations. Although macros for symbolic computation with optimizations that would avoid actually computing (expt 10 100) are “an exercise left to the reader.” Haskell might do the optimal computation, though, thanks to its lazy evaluation.

    #tech #computers #software #FunctionalProgramming #Lisp #SchemeLang #Scheme #R7RS

  36. Are you a Lisper? If yes, What made #lisp special in your view?

    @lxsameer a few things:

    • absolute minimum amount of syntax, makes it very easy to understand how the computer sees each part of the program, makes it easy to implement your own parser if you want to.
    • the ability to define your own evaluator for Lisp syntax, also made considerably easier than other languages due to the minimal syntax. This also makes it easy to develop your own tooling, or to modify existing tooling for the language, which brings me to the next point…
    • macro programming: the ability to hack the Lisp compiler itself so that it can run your own evaluator. This allows you to introduce language features when and where you need them, like linting, type checking, literate programming, alternative evaluation strategies (e.g. lazy evaluation, or concurrent evaluation), etc.
    • functional programming: it is based on the mathematics of the lambda calculus, which is a very elegant way of defining algorithms and computation. Lisp is also an implementation of the “untyped lambda calculus,” which can encode any typed lambda calculus as a system of macros.
    • homoiconicity, again a feature of the minimal syntax, allows you to express programs as data, and data as programs. This is very useful for serialization and transport across multiple computers.
    • REPL-based development, which is a feature many languages have nowadays (although Lisp invented this feature), allows for rapid prototyping and easier debugging.
    • stability: Lisp languages like Common Lisp and Scheme have changed very little throughout the decades, as there is no need to change them. Macro programming means you don’t need to add new language features all the time; language features become extensions you can import into your project.
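
    As a tiny illustration of the macro-programming point above, here is lazy evaluation added to the language with an ordinary syntax-rules macro (my-delay and my-force are illustrative names, sketching what the built-in delay and force do, minus memoization):

    (define-syntax my-delay
      (syntax-rules ()
        ((_ expr) (lambda () expr)))) ; wrap expr in a thunk, deferring it

    (define (my-force promise) (promise)) ; run the thunk on demand

    (define p (my-delay (begin (display "evaluated!") 42)))
    ;; nothing is printed until (my-force p) runs the thunk and returns 42

    No compiler changes are needed: the evaluation-strategy extension is just an ordinary definition you import where you want it.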

    #tech #software #ComputerProgramming #Lisp #CommonLisp #SchemeLang #Scheme #Clojure #FennelLang #GerbilLang #RacketLang

  37. @wingo is asking if anyone knows of a good course on the Nanopass framework (perhaps to recommend to others), but as usual he forgot to add hashtags to his post. So please reply to this post here: https://mastodon.social/@wingo/113956474737820425

    #tech #software #Lisp #Scheme #SchemeLang #R7RS #R6RS #GuileScheme #Guile #Compilers #ProgrammingLanguages #PLT

  38. me, scoffing: dude there’s like a million of those. what, does yours compile on a pregnancy test?

    @garbados I can’t find it now, but someone apparently wrote a #SchemeLang that compiles to Ethereum smart contracts. Being that it runs on the cryptocurrency blockchain, they called it “Pyramid Scheme.” It may have been a joke, but I think someone actually did that.

    I have challenged myself to try to get my large Scheme code base to compile on any #Scheme I can find that claims to conform to the #R7RS standard. So far I can get a significant portion of it to compile on #Guile (my reference implementation), Gambit, MIT Scheme, #Gauche, and Chibi. I hope I can get it to build on Chez with the #R7RS compatibility layer built into the Snow Fort package manager.

  39. If you want to learn Gtk programming

    No matter what language you want to use to program your Gtk app, read the Python tutorial to get started, even if you are not going to write your app in Python.

    It is the most comprehensive and well-written tutorial I have ever seen for Gtk, and it explains important concepts even better than the official documentation does. What applies to Gtk programming in Python applies to almost any other programming language as well, especially scripting languages, so what you learn from this tutorial will apply to your use case too.

    Gtk is a cross-platform GUI toolkit that serves as infrastructure for Linux/BSD desktop environments like Gnome, Cinnamon, MATE, and Xfce. Gtk apps can build and run on Mac OS and Windows without too much difficulty. Though Gtk is written in C, it supports a very wide range of programming languages for application programming, such as Python, JavaScript, Ruby, Lua, most of the Lisp family, Java, Vala, C#, and even C++ if you are a masochist. Because of this, it never occurred to me that if I wanted to learn more about Gtk programming, I should read a tutorial for one specific language (Python). Now that I have read it, I wish I had known this sooner, so I am telling everyone here on the fediverse.

    EDIT: I forgot to mention, you can download the entire tutorial locally as HTML, PDF, or EPUB so that you can hack offline as well!

    #tech #software #Linux #FreeBSD #OpenBSD #NetBSD #Gtk #GUI #AppDev #NativeApp #NativeAppDev #GnomeDE #MateDE #CinnamonDE #Xfce #Python #Lua #Lisp #JavaScript #Ruby #Java #ValaLang #SchemeLang #CPlusPlus #GCC #MacOS #MSWindows

  40. what are the differences between resurrected GuileEmacs that was also announced in EmacsConf2024 and Gypsum? At first glance seems like both projects have the same goal.

    @ram535 thanks for asking! The goal for both projects are similar, but they are achieved in slightly different ways.

    Gypsum is a clone written in Scheme, meaning it is software that behaves exactly like Emacs but is written from scratch in a new code base, and in this case in a completely different programming language: Scheme instead of C. The goal is to have an Emacs that is backward compatible with GNU Emacs but written in Scheme that runs on any R7RS standard compliant Scheme implementation. There is no C code in this project at all; it is purely Scheme. I would like to also target other compilers such as MIT Scheme, Gambit, Stklos, and possibly Chicken and Larceny as well, though this will be pretty difficult and rely on a lot of cond-expand code. The larger goal is to have an Emacs app platform that encourages the use of the Scheme language for creating applications and text-editing workflows, regardless of the underlying compiler.

    @lispwitch ’s “GuileEmacs” is not a clone, but a fork of both GNU Emacs and GNU Guile, meaning it modifies the existing GNU Emacs code base and some of the Guile code base, replacing some of the C source code in GNU Emacs with other C source code from Guile. Then, the Emacs Lisp interpreter written in C is replaced with an Emacs Lisp interpreter written in Guile Scheme. This allows Emacs Lisp to be JIT compiled using Guile’s JIT compiler, and also makes use of all of the Guile software ecosystem to extend Emacs. This is incredibly useful, because there is quite a lot of Guile software, including things like web servers and game engines, and soon it could all be available for use by Emacs programmers. It will probably also be production ready much sooner than my Gypsum project because it only needs to implement the core of Emacs Lisp to work. However, it relies on language features specific to Guile to achieve this, so it is not fully R7RS standards compliant, and will not work on other Scheme implementations.

    #tech #software #Emacs #SchemeLang #Scheme #R7RS #Guile #GuileEmacs #GuileScheme

  41. The parent of this post is my talk at EmacsConf2024 on PeerTube. If you reply to this post here (not the parent), I can see comments on the video and reply to them.

    #tech #software #EmacsConf2024 #EmacsConf #Emacs #Scheme #SchemeLang

  42. I tried using Marc Nieper-Wißkirchen’s Scheme pattern matcher

    It’s called (rapid match), and it is part of the “Rapid Scheme” compiler project, which is a Scheme compiler written in portable, standards-compliant R7RS “small” Scheme.

    It is tiny, only 300 lines of code, and compiles almost instantly. But it lacks features that other pattern matchers might have, especially matching on record data types, because the R7RS “small” standard does not provide any mechanism for introspection of record data. Also, it cannot assign a whole pattern to a single variable; you can only match variables to elements inside the pattern, which is unfortunate.

    But it is efficient, and it gets the job done. It will probably cover 80% of all use cases. Its best features are portability and its small footprint. I have decided to use it in my Gypsum software.

    I also discovered that unfortunately Guile Scheme does not fully implement the R7RS standard for library definitions, it is missing the (export (rename from-sym to-sym)) declaration syntax. But I was able to work around it and get (rapid match) to build on Guile.

    #tech #software #SchemeLang #Scheme #R7RS #GuileScheme #Guile #FunctionalProgramming #PatternMatching

  43. I am presenting for #EmacsConf2024

    The presentation is live now, and I am available for questions in the “Big Blue Button” chat room. Feel free to ask me questions here on ActivityPub.

    The project is an implementation of #EmacsLisp written in portable #R7RS standard #Scheme programming language. The reference implementation is written in #GuileScheme

    #tech #software #Emacs #EmacsConf #SchemeLang #Guile

  44. “Question for lispers with experience: If you should start to learn a LISP style language today, which one do you pick up? Why?”

    @syntaxerror The R7RS “Small” Scheme standard is roughly 80 pages, so you can learn about all of the language features very quickly. I love it because of its minimalism; it is my preferred language.

    My take on it is that the “Small” Scheme standard is perfectly designed to construct larger programming languages with more features. One such language is R7RS “Large” Scheme, but you could theoretically use it to implement Common Lisp, Python, JavaScript, or any other language.

    The R7RS “Large” standard is still being discussed (10 years after “small” was ratified), but it relies heavily on the “Scheme Request For Implementation“ (SRFI) process to fill out features. The larger portion of the R7RS “Large” standard is already ratified and published, so it is still useful even though it is not complete.

    There are many Scheme implementations, but I recommend Guile, as it is almost completely R7RS-Small compliant, and has a ton of other useful features that come with it out of the box. So if you need, for example, a quick web server, or a way to search your filesystem, Guile has modules for that.

    Another good batteries-included Lisp is Racket, which is a larger language built on top of Chez Scheme (an R6RS Standard Scheme implementation). You can easily install the R7RS Scheme language pack on Racket and write your code in Scheme as you read through the R7RS standard document.

    Both Guile and Racket/CS (Chez Scheme) not only have many useful features, but compile to binary code that runs extremely fast for a high-level language.

    Also, if you haven’t already, try to learn to use Emacs.

    #tech #software #Lisp #CommonLisp #Scheme #SchemeLang #R7RS #Emacs #GuileScheme #RacketLang

  45. 1994 Indiana U., Robert G. Berger: “The Scheme Machine”

    This paper describes the design and implementation of the Scheme Machine, a symbolic computer derived from an abstract Scheme interpreter. The derivation is performed in several transformation passes. First, the interpreter is factored into a compiler and an abstract CPU. Next, the CPU specification is refined so that it can be used with the Digital Design Derivation system. Finally, the DDD system assists in the transformation into hardware. The resulting CPU, implemented in field programmable gate arrays and PALs, is interfaced to a garbage-collecting heap to form a complete Scheme system.

    #Scheme #SchemeLang #LispM #LispMachine