Hacker News | rwallace's comments

I'm curious, what problems did you have with JavaScript math? I had the impression it was bog standard IEEE 754 same as every other language nowadays, but I have never tried doing any serious calculation in that language.
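For what it's worth, JavaScript numbers are IEEE 754 binary64, the same representation Python uses for its floats, so the familiar quirks are identical in both languages. A minimal Python sketch of that shared behavior:

```python
# JavaScript numbers and Python floats are both IEEE 754 binary64,
# so both languages show the same rounding behavior.
print(0.1 + 0.2)                     # 0.30000000000000004 in both
print(0.1 + 0.2 == 0.3)              # False in both
# Integers are exact only up to 2**53 in a binary64:
print(float(2**53) + 1.0 == float(2**53))   # True (in JS: 2**53 + 1 === 2**53)
```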


That sounds interesting! Google doesn't find any mention of that, only https://en.wikipedia.org/wiki/Dynamic_Analysis_and_Replannin... that says DART, far from failing hard, was a huge success. Do you have a link to the thread?


here https://news.ycombinator.com/item?id=31044510

btw, I, too, couldn't find traces of the lisp side of this project after a 15 minute search. I was curious about the fix times and lisp codebase size


Thanks! Right, so the successful version was the one written in Lisp. Okay, that's actually understandable. At that time, managed languages were not widely used, and C is really not a good choice for that kind of project. If those are the two options, I can certainly see how Lisp would do better.


Are you saying regular expressions were inspired by biology? How?



Thankfully, you can turn off cursor blinking on Windows (by turning the blink rate down to zero).


I'm curious, why do you think JavaScript wants add with carry?


Okay, so what's the catch? If other manufacturers rely on the spyware and ads to finance the business, which is how they are able to sell such a high spec TV for $450, how is that vendor able to sell such a TV for $400 with no spyware and ads? Not a rhetorical question; I'm actually curious how they make it work.


I'm curious, having heard of Matlab but never used it, what exactly did MathWorks do to turn everyone away from them?


> The IBM and DEC formats were really ugly and writing programs for them included many pitfalls. When I began to use an IBM PC, with its much more foolproof FP format, all problems disappeared.

I remember the IBM format was based on hexadecimal, and was thus criticized for wobbling precision. I thought the DEC format was pretty similar to the Intel one. What problems did you have with it, that disappeared when you switched to the IBM PC?
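For readers who haven't met the IBM format: its significand was six hexadecimal digits, so when the leading hex digit is small, up to three of the high bits are zeros and the effective precision drops by that much; that is the "wobble". A rough Python sketch of the quantization (an illustration only, not a faithful System/360 implementation; no exponent-range or special-case handling):

```python
import math

def ibm32_round(x: float) -> float:
    """Quantize x to IBM System/360 single precision: a 24-bit
    base-16 fraction f in [1/16, 1), value = f * 16**e."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    x = abs(x)
    e = math.floor(math.log(x, 16)) + 1   # normalize so x / 16**e is in [1/16, 1)
    f = x / 16.0 ** e
    f = round(f * 2 ** 24) / 2 ** 24      # keep 24 fraction bits
    return sign * f * 16.0 ** e

# "Wobbling precision": just above 1.0 the leading hex digit is 1,
# so three of the 24 fraction bits are wasted leading zeros.
print(ibm32_round(1.0 + 2 ** -21))   # 1.0 -- the increment is lost
print(ibm32_round(1.0 + 2 ** -20))   # just representable, survives
```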


Yeah, I don't remember the DEC format being poor, and some of the differences which did exist really felt like "make sure IEEE-754 is gratuitously incompatible with DEC".

Denormals/gradual underflow support in hardware was one of those, for example. DEC generally trapped on those and let a library handle it. Most of the time, gradual underflow bites people by hiding the precision loss rather than being something useful to exploit for speed.
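Gradual underflow is easy to poke at with ordinary IEEE doubles; a small Python sketch of the behavior being discussed (subnormals are nonzero but carry fewer significant bits, and the range eventually runs out anyway):

```python
import sys

tiny = sys.float_info.min            # smallest normal double, ~2.2e-308
sub = tiny / 4                       # a subnormal: below the normal range
print(sub > 0.0)                     # True: gradual underflow, not flush-to-zero
# A quick subnormal test, in the spirit of C's fpclassify():
print(0.0 < abs(sub) < sys.float_info.min)   # True
# Subnormals carry fewer significant bits, and even they bottom out:
deep = sys.float_info.min * sys.float_info.epsilon   # 5e-324, smallest subnormal
print(deep / 2)                      # 0.0: halving the smallest subnormal underflows
```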

Hardware support for it meant that people tended to "stash" information in there just like they do for NaN-boxing. AutoCAD, for example, was notorious for having a zillion denormals, none of which had anything to do with calculation.

The number of people who ever benefited from gradual underflow was ridiculously tiny while the number of people who suffered performance loss was huge. This was one of those tradeoffs that wasn't worth it for a very long time--only now that hardware is practically free does it not matter.


The DEC single-precision format was close to the IEEE format, the main difference being that it did not have a useful behavior if the exceptions were masked, so they had to be handled correctly.

On the other hand, the double-precision format, in order to reduce the cost of the hardware FPU, had the same exponent range as single precision, unlike the IEEE formats, where the longer formats have greater exponent ranges.

The exponent range typical for single-precision FP formats is really too small for solving many problems of physical modelling and simulation.

With the DEC FP format, I ran into overflows very frequently, so I had to add various scale factors in many equations.

On IBM PCs, such overflows never appeared, so there was no need to waste time determining adequate scale factors. The whole reason floating-point numbers were introduced was to stop wasting time on scale factors, as fixed-point computations require.
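The scale-factor workaround described here looks like this in any language; a Python sketch, using exaggerated double-precision magnitudes to stand in for the DEC situation:

```python
import math

x, y = 1.0e200, 1.0e200
# Naive: the intermediate squares overflow, even though the result fits.
print(math.sqrt(x * x + y * y))      # inf
# With a scale factor: divide through by the largest magnitude first.
s = max(abs(x), abs(y))
print(s * math.sqrt((x / s) ** 2 + (y / s) ** 2))   # ~1.414e200
```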


I note you said "DEC format". Were you perhaps on PDPs instead of VAXen?

I'm pretty sure that VAX had support for G_floating contemporaneously with the 8087 (I actually think it predates it but I can't prove it). I can see that MicroVAX had support for it back in 1985 which would seem to imply that mainline VAX had it probably for several years prior.


When it was launched, in October 1977, VAX supported only the "D" double-precision floating-point format, which was the same as in previous DEC computers like the PDP-11, and whose exponent range was too small.

In 1977 there was the first publication about the Intel floating-point format and the standardization work started soon after that.

The first devices that supported the Intel FP formats in hardware, which were later adopted as IEEE 754, were the AMD Am9512 in 1979 (second-sourced by Intel as the 8232), then the Intel 8087 in 1980. (At that time Intel and AMD were frequent partners; only several years later, when Intel began to make tons of money from the IBM PC and no longer wanted to share it with anyone, did they sever their links with AMD.)

Meanwhile the debates about the future standard continued, and DEC acknowledged that their "D" format had too small an exponent range, so they introduced the "G" format in VAX, which was similar, but not identical, to the format proposed by Intel (it used a different exponent bias).

In 1978 DEC VAX did not have the "G" format and in 1982 it did, so it was introduced between 1979 and 1981, possibly at about the same time as the Intel 8087; but in any case it was not completely compliant with the proposed standard, and DEC still hoped that they might succeed in imposing their own FP formats.


Are these actual numbers? If so, you've got me curious! How does it work? Do you really have billions of lines of code? How many of those 580k tables are auto generated?


This is a standard SAP system. I just checked the numbers before I put them down here. SAP has been around since the late '80s and ships with its source code, and very little of the code has been officially retired, so the amount of code and the number of tables have been growing monotonically for four decades. Lots of redundant programs as well. So the code base is huge, and the database is of course not completely normalized.

SAP's programming environment comes with a built-in editor and a relational database; the metadata of every table is stored in the database, and programs are stored not on the file system but in the database as well. (This makes tools like GitHub superfluous: when you edit code, it takes a lock in the database.)

So, conveniently, in SAP there is a database table which holds all database tables, and another table for all fields. Likewise there is a table for all programs, and SAP stores all program lines in another database table. So you just need to count the number of entries in these tables.

I am not aware of any generated tables, but maybe there are some; perhaps 5%. I'd estimate that maybe half of the programs are actually generated code. "Program" is probably not accurate either: some of the programs are modules or function pools which are invoked by other programs.


I did some further research today because I wondered myself. I think the number of generated programs is higher than my estimate. I assess that the number of non-generated programs is in the ballpark of 500k and the number of transparent database tables around 300k. The rest are views defined on the database, not transparent tables.

The numbers are smaller than initially given, but still big enough that they cannot be handled with unstructured databases.


In fairness, there would actually be much merit to this way of writing about an airline trip if the target audience were 19th-century people who had never heard of an airplane except in fanciful speculation. It explains a great deal about air travel and adjacent parts of the setting that a story written for a 21st-century audience, already familiar with the setting, naturally skips over.

