Post by Alexander Schreiber
Post by Pascal J. Bourguignon
* Programmed in Common Lisp, either the fixnum in the Ariane 5 would have
been converted into a bignum, or a condition would have been
signaled, which could have been handled. This would have taken
time, which could perhaps have "exploded" the real time constraints,
but it is better to control your rocket sluggishly than not to
control it at all.
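A minimal sketch of that behaviour in standard Common Lisp (the fixnum
limit is implementation-defined, the automatic promotion to a bignum is not):

  (typep (1+ most-positive-fixnum) 'bignum)   ; => T, the result simply grows
  ;; with a declared FIXNUM type and (safety 3), implementations such as SBCL
  ;; signal a TYPE-ERROR condition instead, which HANDLER-CASE can handle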
That was not the real problem. The root cause was the design assumption that
the overflowing value was _physically_ limited, i.e. during normal operation
it would have been impossible to overflow and an overflow would in fact have
signaled some serious problems bad enough to abort. While this held true in
Ariane 4, it no longer was true in the more powerful Ariane 5.
Your "solution" would have papered over the flawed design assumptions, which
is _not_ the same as fixing them.
You’re forgetting we’re talking about embedded programs with real-time processes.
You don’t have the time to stop everything and “debug” the design.
You have to control a rocket and avoid it crashing!
Who spoke about debugging a live rocket?
Post by Alexander Schreiber
Post by Pascal J. Bourguignon
* Programmed in Common Lisp, instead of using raw numbers of physical
magnitudes:
(+ #<kilometer/hour 5.42> #<foot/fortnight 12857953.0> )
--> #<meter/second 4.7455556>
and Mars Climate Orbiter wouldn't have crashed.
This is ridiculous. If you end up mixing measurement systems (such as metric
and imperial) in the same project, you are _already_ doing it horribly wrong.
It wasn’t in the same project. The data was actually sent from a
remote Earth station. So this is even worse than not using magnitudes
with units inside the process; it was a serialization/deserialization
error. But notice how Lisp prints out the speeds above! It writes
the units along with the values!
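A minimal sketch of how such printing is arranged (the QUANTITY class here
is hypothetical, not any particular units library; PRINT-OBJECT and
PRINT-UNREADABLE-OBJECT are standard):

  (defclass quantity () ((value :initarg :value) (unit :initarg :unit)))
  (defmethod print-object ((q quantity) stream)
    (print-unreadable-object (q stream)
      (format stream "~(~A~) ~A" (slot-value q 'unit) (slot-value q 'value))))
  (print (make-instance 'quantity :value 4.7455556 :unit 'meter/second))
  ;; prints #<meter/second 4.7455556> -- the unit travels with the value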
Now, of course it’s not a programming language question. We already
determined that, when noting that neither the ANSI Common Lisp nor the
ANSI C standard imposes bounds checking, but that C programmers don't
code bounds checks, and C implementers, being C programmers,
implement compilers that don't do bounds checking, while the inverse is
true of Common Lisp programmers.
This is always the same thing: “statically typed” proponents want to
separate the checks from the code, performing (or not) the checks
during design/proof/compilation, while “dynamically typed” proponents
keep the checks inside the code, making the compiler and system
generate and perform all the typing, bounds, etc. checks at run-time.
So when a C guy (any statically typed guy) sends data, he expects that
the type and bounds of the data are known (beforehand, by both
parties). But when a Lisp guy (any dynamically typed guy) sends data,
he sends it in a syntactic form that explicitly types it, and the
data is parsed, validated, bounds-checked and typed according to the
transmitted syntax on the receiving end.
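A minimal sketch of such a receiving end (the :SPEED/:UNIT message layout is
invented for illustration; READ-FROM-STRING, CHECK-TYPE and
DESTRUCTURING-BIND are standard):

  (defun parse-speed (line)
    (let ((msg (let ((*read-eval* nil)) (read-from-string line))))
      (destructuring-bind (&key speed unit) msg
        (check-type speed real)
        (check-type unit (member :meter/second :kilometer/hour))
        (if (eq unit :kilometer/hour) (/ speed 3.6) speed))))

  (parse-speed "(:speed 5.42 :unit :kilometer/hour)")  ; => ~1.5055555 m/s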
Of course, generating C code doesn’t mean that you can’t design your
system in a "dynamically typed” spirit. But this is not the natural
noosphere of the C ecosystem.
Post by Alexander Schreiber
The design fault was mixing measurement systems, which one should
_never_ do on pain of embarrassing failure. Papering over this design
screwup with a language environment that _supports_ this (instead of
screaming bloody murder at such nonsense) doesn't really help here.
Again, we are talking about an embedded program, in a real time
system, where you have only seconds of burn stage on re-entry, and
where you DON’T HAVE THE TIME to detect, debug, come back to the
design board, compile and upload a new version!
Again, what is this about live debugging a flying rocket? If you propose
writing your realtime control code and deploying it straight to your
production environment (in that case, the rocket about to lift off) you
have no business writing this kind of code.
You design your system, review the design, implement, test and only
deploy it live if you are confident that it will work correctly (and
the tests agree).
The above problems are things that - at the latest - should have been
caught by the test setups. Preferably in the design stage. Actually,
IIRC for the Ariane issue there was a test that would have revealed the
problem, but it was cancelled as being too costly. Which in retrospect
was of course penny-wise, pound-foolish.
The software that uploaded the untagged, without units, bit field
*data*, instead of some meaningful *information*, hadn’t even been
completed before the orbiter was in space! It wasn’t developed by the
same team, and wasn’t compiled into the same executable.
Nonetheless, here a lisper would have sent *information* in a sexp,
and dynamic checks and conversions would have been done.
If you will, the design would have been different in the first place!
Still, supporting multiple concurrent measurement systems means adding
complexity. Which is rarely a good idea. So again, the better approach
would have been to make sure to only use _one_ measurement system
(imperial _or_ metric (preferably metric)) which means you don't need
the measurement system awareness and conversion code in the first place.
To borrow a saying from the car industry: "The cheapest and most
reliable part is the one that isn't there in the first place."
Post by Alexander SchreiberPost by Pascal J. Bourguignon"The defect was as follows: a one-byte counter in a testing
routine frequently overflowed; if an operator provided manual
input to the machine at the precise moment that this counter
overflowed, the interlock would fail."
But why did the counter overflow in the first place? Was it simply
programmer oversight that too small a datatype was used or was this
actually an error that just didn't have noticeable consequences most
of the time? If the latter, then again, papering over it with a
never overflowing counter is not a fix.
But if it was a problem, it *would* eventually reach a bounds check,
and signal a condition, thus stopping the process of irradiating and
killing people.
Remember: a Lisp program (any “dynamically typed” program) is FULL of
checks!
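A minimal sketch of such a check on a one-byte counter (BUMP-COUNTER is a
made-up name; CHECK-TYPE is standard, and with (safety 3) many
implementations derive equivalent checks from type declarations):

  (defun bump-counter (counter)
    (let ((new (1+ counter)))
      (check-type new (unsigned-byte 8))  ; signals TYPE-ERROR at 256 instead of wrapping to 0
      new))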
Post by Alexander Schreiber
Post by Pascal J. Bourguignon
since again, incrementing a counter doesn't fucking overflow in
lisp!
* Programmed in Common Lisp, heartbleed wouldn't have occured,
because lisp implementors provide array bound checks, and lisp
programmers are conscious enough to run always with (safety 3), as
previously discussed in this thread.
Hehe, "conscious enough to run always with (safety 3)". Riiiiight.
And nobody was ever tempted to trade a little runtime safety for
speed, correct?
Those are C programmers. You won’t find any other safety than 3 in my
code. You should not find any other safety than 3 in mission critical
code, much less in life threatening code.
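A minimal sketch of what that buys (bounds checking on AREF is
implementation-provided rather than required by the standard, as noted
above, but it is what (safety 3) is expected to enable):

  (declaim (optimize (safety 3)))
  (handler-case
      (aref (make-array 4) 10)                ; out-of-bounds access
    (error (c) (format t "caught: ~A~%" c)))  ; a condition is signaled, not a silent memory read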
There is a _mountain_ of mission critical and/or life threatening code
where "safety 3" is meaningless because it is not written in Lisp.
Post by Alexander Schreiber
As for heartbleed: arguably, the RFC that the broken code
implemented shouldn't have existed in the first place.
Post by Pascal J. Bourguignon
What I'm saying is that there's a mindset out there of using
modular arithmetic to approximate arithmetic blindly. Unless you
need modular arithmetic, do not program with modular arithmetic!
Well, modular arithmetic doesn't go away because one wishes it so.
As a developer doing non time critical high level work one might be
able to cheerfully ignore it, but the moment one writes sufficiently
time critical or low level code one will have to deal with it.
Because modular arithmetic is what your CPU is doing - unless you
happen to have a CPU at hand that does bignums natively at the
register level? No? Funny that.
This might have been true in 1968, when adding a bit of memory added 50 g of payload!
Nowadays, there’s no excuse.
Wrong.
If your code is sufficiently time critical, stuff like that begins to
matter. At the leisurely end we have an ECU: your engine runs at, say
6000 rpm so you have an ignition coming up every 20 ms. Your code _has_
to fire the sparkplug at the correct time, with sub-millisecond
precision or you'll eventually wreck the engine. While processing a
realtime sensor data stream (engine intake (air, fuel), combustion
(temperature, pressure), exhaust (temperature, pressure, gas mix) and
others). This is routinely done using CPUs that aren't all that super
powerful, in fact, using the cheapest (and that usually means slowest)
CPUs (or rather: microcontrollers) that are still just fast enough.
For example, the Freescale S12XS engine control chip (injector and
ignition) has 8/12 KB RAM and 128/256 KB flash. You are not going to
muck around with bignums in a constrained environment like that ... ;-)
And there are many, many more of those kind of embedded control systems
around than PCs, tablets and phones (all of them pretty powerful
platforms these days) combined.
At the faster end: networking code handling 10 GBit line speeds. With
latencies in the single to double digit microsecond range, you don't have
the luxury of playing with nice abstract, far-away-from-the-metal code
if you want your packet handling code to run at useful speeds.
Post by Alexander Schreiber
Post by Pascal J. Bourguignon
Post by William Lederer
And if the flight safety of an aircraft depended upon the current
Lisp version of Ironclad's impenetrability, we would be in
trouble.
This is another question, that of the resources invested in a
software ecosystem, and that of programming language mind share.
Why don't the cryptographers write their libraries in Common Lisp,
and instead choose to produce piles of C?
Usefulness. If I write a library in C, pretty much everything that
runs on Unix can link to it (if need be, via FFI and friends) and
use it. If I write a library in Common Lisp, then only code written in
Common Lisp can use it, unless people are willing to do some
interesting contortions (such as wrapping it in an RPC server).
Anything running on Unix can link to libecl.so (which is ironically a
CL implementation using gcc, but we can assume it’s a temporary
solution).
Post by Alexander Schreiber
Exercise for the interested: write a library in Common Lisp that
does, say, some random data frobnication and try to use it from: C,
Python, Perl, C++ _without_ writing new interface infrastructure.
But the point is to eliminate code written in C, Perl, C++! So your exercise is academic.
I can very confidently say: This will never happen. Just look at the
amount of _COBOL_ code that is still in use. In fact, people are _still_
writing COBOL (dude I know does just that for a bank).
Kind regards,
Alex.
--
"Opportunity is missed by most people because it is dressed in overalls and
looks like work." -- Thomas A. Edison