I have a daydream.
-
If the reason for the split is "Lua is for 'business logic', C is for high-performance stuff", you'd think you'd want the matrix multiplication on the C side. But things will never work out that politely. At some point in a single function you'll wind up going, oh, I gotta multiply this matrix, it would be really inconvenient to move all this over the line. And invariably, a larger and larger set of utility functions winds up being duplicated 3 times (C, Lua, GLSL).
-
There is a second problem with "partly in Rust, partly in Lua". Rust is greedy! Rust is a totalizing system. C is pretty easy to interface with other languages; C++, less so. Rust has all kinds of things that don't map well into other languages but are *important*.
Why is everything being rewritten in Rust these days? I have a theory. It's because Rust has all these really great libraries people want to use. And the only way to access them is Rust, since *Rust doesn't interoperate well!*
Like, Rust *can* interoperate— as I said, mlua's great. But the interop is such that you have to do a bunch of *work* to expose functionality in Rust primitives to another language.
-
Rust (glossing over many things) was supposed to be a language for writing VMs, for writing *one* VM, Servo. It's designed for interop. It's *designed* for "the hard parts in this compiled language, easy parts in JS". But the hard parts aren't hard enough. It's easier to just write it *all* in Rust!
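(For the curious: the "bunch of work" above is mostly flattening Rust types down to the C ABI, the lowest common denominator that Lua's C API, ctypes, etc. can speak. A minimal sketch — `sum_slice` is a hypothetical function, not from any real project — showing that a slice can't cross the boundary as a `&[f64]`; it goes over as a raw pointer plus length:)

```rust
// Sketch: exposing one Rust function over the C ABI. The point is that
// Rust-native types (slices, String, Result, enums with data) don't cross
// the boundary; you flatten them into pointers and integers by hand.

#[no_mangle]
pub extern "C" fn sum_slice(ptr: *const f64, len: usize) -> f64 {
    // Safety: the foreign caller promises `ptr` points to `len` valid f64s.
    let slice = unsafe { std::slice::from_raw_parts(ptr, len) };
    slice.iter().sum()
}

fn main() {
    // Calling through the same flattened signature a Lua/C caller would use.
    let data = vec![1.0, 2.0, 3.5];
    println!("{}", sum_slice(data.as_ptr(), data.len())); // prints 6.5
}
```

Multiply that by every function, every struct, every error type you want visible from the scripting side, and the pull toward "just write it all in Rust" gets strong.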
-
At least, this is the trap I'm in. I'd like a scripting language, but my scripting languages can't easily access Rust stuff, and a lot of what I want to use is in Rust, so I just use Rust. Suboptimal.
-
Hence, a solution: *The scripting language should use the Rust ABI*. I'm not sure this has ever been done fully. There are several scripting languages which are Rust-flavored or sit atop Rust; I've skimmed docs for a couple. As far as I *know*, "just interpret Rust" would be a novel one.
-
I've been dealing with "writing in 2 languages means duplication/annoying refactors" a long time.
Thus, my daydream for a long time has been a programming language that's interpreted/compiled dual mode, such that "porting" across the interpreted/compiled line is just a matter of adding types until there are no untyped statements left. (From what I hear, Racket tried this, and there were huge problems. Not sure if I should take this as a warning against trying, or a guide for things to avoid.)
-
I at one time thought I'd do this by taking an interpreted language and adding types (this was Emily 2). It turns out writing a typechecker is hard! Writing a compiler is hard! I never got the full thing working. Okay, I give up. Somebody already wrote the Rust compiler. I'll go the other direction. Start with a compiled language I like using, give it an interpreted mode. Solve the "compilers hard" and "Rust interop annoying" problems at once!
-
This thread got ungainly & apparently I needed 1000-character posts to write it coherently. But I have been stewing in these ideas for several years, and some *variant* of them for, like, 15 years. This might actually happen. Last year I wrote a language interpreter in Rust (a flimsy LISP) and it wasn't so hard. Eventually some version of this will happen. The nice thing is, even if it's terrible, it has the defense "well, use it during dev, then rewrite the whole thing in Rust before you ship it".
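(To give a sense of scale: a much flimsier cousin of that kind of interpreter fits in a few dozen lines of plain Rust. Everything below — the whitespace-separated token hack included — is illustrative only, not from any real project:)

```rust
// A toy S-expression calculator: the "eval loop isn't scary" demo.
// Tokens must be whitespace-separated, e.g. "( + 1 2 )".

#[derive(Debug)]
enum Expr {
    Num(f64),
    Call(String, Vec<Expr>),
}

fn parse(tokens: &mut std::iter::Peekable<std::str::SplitWhitespace<'_>>) -> Expr {
    let tok = tokens.next().expect("unexpected end of input");
    if tok == "(" {
        let op = tokens.next().expect("expected operator").to_string();
        let mut args = Vec::new();
        while tokens.peek() != Some(&")") {
            args.push(parse(tokens));
        }
        tokens.next(); // consume ")"
        Expr::Call(op, args)
    } else {
        Expr::Num(tok.parse().expect("expected a number"))
    }
}

fn eval(e: &Expr) -> f64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Call(op, args) => {
            let vals: Vec<f64> = args.iter().map(eval).collect();
            match op.as_str() {
                "+" => vals.iter().sum(),
                "*" => vals.iter().product(),
                _ => panic!("unknown op {op}"),
            }
        }
    }
}

fn main() {
    let src = "( + 1 2 ( * 3 4 ) )";
    let ast = parse(&mut src.split_whitespace().peekable());
    println!("{}", eval(&ast)); // prints 15
}
```

The gap between this and "interpret actual Rust" is enormous, of course — the point is only that the interpreter side of the project is tractable.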
Tangential thought:
A really funny thing is almost everything good about Rust comes down to when it was written. It has excellent LLVM integration because it was written right after LLVM happened. It has a good build system because the build system was written after pip and npm. It has great libraries because all the libraries were written between 2020 and 2024 and so they're all modern. Not sure where I'm going with this but it makes me wonder how Rust will age.
"How Rust will age". Heh.
-
Tangent:
Imagine this approach for "interpreting" a Rust-like language. You link against LLVM; you *actually compile* the Rust into machine code, in memory; when you want to invoke your script, you simply jump to that code. This is the approach used by Julia and (optionally) GHCi, and it has many advantages.
I wouldn't use it, at least not at first, because it makes your "interpreter" non-embeddable. LLVM is tens of megabytes. It also excludes you from W^X ("no JIT") platforms (iPhone, wasm).
-
It is my opinion there is virtually no good reason for a distributed executable to ever be larger than ten megabytes, and for a program without art/sound assets I'd lower that ceiling to something like one to four megabytes. I am aware this is a deeply antiquated opinion but it is the opinion I have. I think this is also the attitude you need if you want to make a language that targets wasm. I have made a serious attempt at JITing C# to ASM.js. There were… there were executable size problems.
-
@mcc I often think about how a 16MiB 486 was perfectly capable of displaying and editing any document a human could reasonably write. Advances since then have allowed for higher fidelity images, and audio/video playback, and yet software that does neither still consumes gigabytes of RAM. I have often wondered about adapting a coredump analysis tool to do a top-down breakdown of how the address space is actually used...
-
@kitten_tech I feel like audio/video editing worked really well on circa 2000 computers and I'm not completely certain we do it better now
-
@mcc @kitten_tech Video editing worked ok around then, but more advanced editing was assisted by specialized hardware. This included the switch to HD and DTV sometime between 2001 and 2007 (maybe more, but that's when I worked in the industry). There were some serious limits with the PCI bus. I remember being one of the few people benchmarking PCI latency.
Aside from HD, mpeg4/avc, and encoders, one of the big leaps at the time was the speed of consumer hard drives.