It was designed to support a specific Air Force requirement: the ability to launch, release or capture a spy satellite, then return to (approximately) the same launch site, all on a single orbit. (I say 'approximately' because a West Coast launch would have been from Vandenberg Air Force Base, returning to Edwards Air Force Base.)
The cargo bay was sized for military spy satellites (imaging intelligence) such as the KH-11 series, which may have influenced the design of the Hubble Space Telescope. Everything else followed on from that.
Without those military requirements, Shuttle would probably never have got funded.
I'm listening to "16 Sunsets", a podcast about Shuttle from the team that made the BBC World Service's "13 Minutes To The Moon" series. (At one point this was slated to be Season 3, but the BBC dropped out.) https://shows.acast.com/16-sunsets/episodes/the-dreamers covers some of the military interaction and funding issues.
You're saying the same thing he is, but with more precise examples. There were also plenty of less useful requirements, which is what he was getting at by calling it 'designed by committee'. It was also intended to be a 'space tug' to drag things to new orbits, especially from Earth orbit out towards the Moon, and this is also where its reusable-but-not-really design came from.
It's also relevant that the Space Shuttle came as a tiny segment of what was originally envisioned (in large part by Wernher von Braun) as a far grander scheme of complete space expansion and colonization. The Space Shuttle's origins are in the Space Transportation System [1], which was part of a goal to have humans on Mars by no later than 1983. Then Nixon decided to effectively cancel human space projects after we won the Space Race, and so progress in space stagnated for the next half century; we were left with vessels whose design and functionality no longer served any real purpose.
Actually $9 more to go from 2GB to 4GB, just for the RAM chip itself. See my reply above.
32GB is impossible at present. No-one makes a 256 gigabit (32GB × 8 bits) chip to fit in the same footprint, so you would have to do a new board design with multiple RAM chips, which might not be achievable within the board's current dimensions.
Micron do a 192 gigabit (=24GB) chip but it's LPDDR5, rather than LPDDR4x, which I imagine the RPi 5's SoC can't drive.
The problem is that to fit on the existing board, you need a single RAM package with the same number and function of pads. Looking at a sample photo of a board, I can see it has a Micron package labelled D8CJN. Looking that up on Micron's FBGA parts decoder, the full product code is MT53E2G32D4DE-046 WT:C which is a 64Gb (64 gigabit, i.e. 8 gigabyte) LPDDR4x RAM at 2133 MHz, in a 200-ball TFBGA package.
Looking at Micron's range, they now have a 128Gb (16GB) chip with matching specs, which is the MT53E4G32D8CY-046 WT:C. That chip costs £74.66 per chip from Mouser Electronics (just one site I found selling it) in quantities of 1,360.
In contrast the 2GB chip - which I surmise is a MT53E512M32D1ZW-046 - costs £9.26 each in quantities of 250. There's a price break listed for 2,720 of the things, but I'm not going to ask for a quote.
So that's £65.40 more for the larger RAM chip. Which is about $80 on today's mid-market exchange rates.
Looking at the 4GB it's probably a MT53E1G32D2FW-046, which is £16.72 in bulk quantities of 1,360 from the same wholesaler (for the revision C part). So that's an extra £7.46 or about $9.
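Putting those quoted figures together (the per-chip prices are the Mouser figures above; the 1.22 USD/GBP exchange rate is my assumption, not a quoted value):

```python
# Cost deltas from the Mouser bulk prices quoted above.
# GBP_TO_USD is an assumed mid-market rate, not a quoted figure.
GBP_TO_USD = 1.22

prices_gbp = {
    "2GB MT53E512M32D1ZW-046": 9.26,
    "4GB MT53E1G32D2FW-046": 16.72,
    "16GB MT53E4G32D8CY-046": 74.66,
}

base = prices_gbp["2GB MT53E512M32D1ZW-046"]
for part, price in prices_gbp.items():
    extra = price - base
    print(f"{part}: +£{extra:.2f} (~${extra * GBP_TO_USD:.0f}) over the 2GB chip")
```

Which reproduces the deltas above: about $9 extra for the 4GB chip and about $80 extra for the 16GB chip.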
Why are other systems' RAM upgrades cheaper? Largely because they use more chips, and therefore each chip has less capacity. That means either the chip is less dense, or it's a smaller die, so fewer defects are likely within the one chip, making yields higher. An SO-DIMM might have two or four chips on it, depending on whether it's populated on one side or both; a full-size DIMM might be 8 or 16 (or 9 or 18 if it has ECC).
Obviously I've made the assumption that this is just a drop-in replacement chip on the existing board. But I assume it would be very hard to redesign the board to fit in the same form factor, with all connectors in the same positions, but add on another RAM chip. If the SoC even supports driving more than one RAM chip, which it may not.
TL;DR that's what Micron charge for the larger chip.
There's no way that's what RPI are paying, or even close - it should be somewhere around $7 or so for a 4GB part, $3.60-4 for a 2GB part.
If we use your prices above just as a scale, you can already see that one 4GB part is cheaper per GB than a 2GB part, so as I say it's around $3 in cost to upgrade the base model from 2GB to 4GB.
SQLite is in-process. It never spins up another process or thread. It's just a library. Its blocking I/O means that the thread that called into SQLite can't do anything else until it completes. Though note that SQLite's underlying API is essentially a row-by-row interface - you run a query by calling sqlite3_step(), which returns when the next row has been retrieved.
SQLite does have a page cache, so recently-accessed pages will still be in the cache, frequently allowing the next result to be returned without stalling. And the operating system's file cache may be reading ahead if it detects a sequential access pattern, so the data may be available to SQLite without blocking even before it requests it. (SQLite's default page size is 4KB these days - it was 1KB in older versions - but the OS may well perform a larger physical read than that into its cache anyway.)
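Python's built-in sqlite3 module is a thin wrapper over that same C API, and iterating a cursor performs one sqlite3_step() per row, so each step can block the calling thread. A minimal sketch:

```python
import sqlite3

# In-memory database for illustration; a file-backed one behaves the same,
# except that stepping to the next row can block on a disk read.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [("a",), ("b",), ("c",)])

# Each loop iteration is one sqlite3_step() under the hood: the thread
# that called into SQLite waits until the next row is available.
rows = []
for row in conn.execute("SELECT id, val FROM t ORDER BY id"):
    rows.append(row)

print(rows)  # [(1, 'a'), (2, 'b'), (3, 'c')]
conn.close()
```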
Asynchronous I/O usually isn't actually any faster to complete. Indeed there might be more overhead. The benefit is that you can have fewer threads, if you architect your server around asynchronous I/O. That saves memory on thread stacks and other thread-specific storage. It can also reduce thrashing of CPU cache and context switch overhead, which can be an issue if too many threads are runnable at the same time (i.e. more threads than you have CPU cores.) It might also reduce user/kernel mode transitions.
I wasn't suggesting sqlite itself starts threads. But the quoted sentence suggests the benchmark uses a single-process/multi-thread setup so that there's a thread per tenant ("SQLite gets its own thread per tenant, and in each thread they run the query to measure").
If you look at the map, Shipfusion is 4350 North 5th Street, North Las Vegas. The Circle K is literally next door, on the intersection of North 5th Street and East Craig Road. See https://maps.app.goo.gl/Wz6iNaViGf3ToM1U6.
I think what happened here is that the FedEx driver was in such a hurry to drop off that he saw a bunch of people apparently outside Shipfusion, stopped and said 'Hey, delivery for you', rather than going and finding the proper entrance to the loading dock. And to be fair, that's the corner with the company logo on it - looking at Google Streetview I can't see a logo at the back of the building where the loading dock is!
And to be even more fair to the driver, they are often given utterly unrealistic amounts of time to drive to the next drop, or to complete the drop. They can only complete their rounds by cutting corners.
Indeed - the 'dynamic' comes from 'dynamic logic'. Wikipedia: "It is distinguished from the so-called static logic by exploiting temporary storage of information in stray and gate capacitances." What Dennard realised was that you don't actually need to have a separate capacitor to hold the bit value - the bit value is just held on the stray and gate capacitance of the transistor that switches on when that bit's row and column are selected, causing the stray capacitance to discharge through the output line.
Because of that, the act of reading the bit's value means that the data is destroyed. Therefore one of the jobs of the sense amplifier circuit - which converts the tiny voltage from the bit cell to the external voltage - is to recharge the bit.
But that stray capacitance is so small that it naturally discharges through the high, but not infinite, resistance of the transistor when it's 'off'. Hence you have to refresh DRAM, by regularly reading every bit frequently enough that it hasn't discharged before you get to it. In practice you only need to read every row that frequently, because there's actually a sense amplifier for each column, reading all the bit values in that row, with the column address strobe just selecting which column bit gets output.
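As a toy model of why refresh is needed: the cell voltage decays as V(t) = V0·e^(−t/RC) through the leakage resistance, and the refresh interval must come well before V drops below what the sense amplifier can resolve. The component values here are made-up illustrative numbers, not real DRAM process figures:

```python
import math

# Made-up illustrative values, not real DRAM process parameters.
C = 30e-15     # ~30 femtofarads of stray/gate capacitance
R = 5e12       # very large, but finite, 'off' leakage resistance (ohms)
V0 = 1.2       # cell voltage when freshly written
V_MIN = 0.6    # minimum voltage the sense amplifier can still resolve

# V(t) = V0 * exp(-t / (R*C)); solve for the time to decay to V_MIN.
t_retain = R * C * math.log(V0 / V_MIN)
print(f"retention ~{t_retain * 1000:.0f} ms -> must refresh well before that")
```

With these numbers the cell holds its value for roughly 100 ms, which is why real DRAM controllers refresh every row on a schedule of tens of milliseconds.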
The reason was that it makes subroutine return and stack frame cleanup simpler.
You know this, but background for anyone else:
ARM's subroutine calling convention places the return address in a register, LR (which is itself a general purpose register, numbered R14). To save memory cycles - ARM1 was designed to take advantage of page mode DRAM - the processor features store-multiple and load-multiple instructions, which have a 16-bit bitfield to indicate which registers to store or load, and can be set to increment or decrement before or after each register is stored or loaded.
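That 16-bit register-list field can be illustrated with a quick decoder (a Python sketch, not ARM tooling):

```python
# Decode the 16-bit register-list field of an ARM LDM/STM instruction.
# Bit n set => register Rn is transferred (R13=SP, R14=LR, R15=PC).
def decode_reglist(bits16):
    names = {13: "SP", 14: "LR", 15: "PC"}
    return [names.get(n, f"R{n}") for n in range(16) if bits16 & (1 << n)]

# R8, R9, R10 and LR: bits 8, 9, 10 and 14 set.
mask = (1 << 8) | (1 << 9) | (1 << 10) | (1 << 14)
print(decode_reglist(mask))  # ['R8', 'R9', 'R10', 'LR']
```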
The easy way to set up a stack frame (the way mandated by many calling conventions that need to unwind the stack) is to use the Store Multiple, Decrement Before instruction, STMDB. Say you need to preserve R8, R9, R10:
STMDB SP!, {R8-R10, LR}  ; push R8-R10 and the return address; ! writes the updated address back to SP
At the end of the function you can clean up the stack and return in a single instruction, Load Multiple with Increment After:
LDMIA SP!, {R8-R10, PC}  ; restore R8-R10 and load the return address straight into PC
This seemed like a good decision to a team producing their first ever processor, on a minimal budget, needing to fit into 25,000 transistors and to keep the thermal design power cool enough to use a plastic package, because a ceramic package would have blown their budget.
Branch prediction wasn't a consideration as it didn't have branch prediction, and register pressure wasn't likely a consideration for a team going from the 3-register 6502, where the registers are far from orthogonal.
Also, it doesn't waste instruction space: you already need 4 bits to encode 14 registers, and it means that you don't need a 'branch indirect' instruction (you just do MOV PC,Rn) nor 'return' (MOV PC,LR if there's no stack frame to restore).
There is a branch instruction, but only so that it can accommodate a 24-bit immediate (implicitly left-shifted by 2 bits so that it actually addresses a 26-bit range, which was enough for the original 26-bit address space). The MOV immediate instruction can only manage up to 12 bits (14 if doing a shift-left with the barrel shifter), so I can see why Branch was included.
Indeed, mentioning the original 26-bit address space: this was because the processor status flags and mode bits were also available to read or write through R15, along with the program counter. A return (e.g. MOV PC,LR) has an additional bit indicating whether to restore the flags and processor state, indicated by an S suffix. If you were returning from an interrupt it was necessary to write "MOVS PC, LR" to ensure that the processor mode and flags were restored.
# It was acceptable in the 80s, it was acceptable at the time... #
ARM1 didn't have a multiply instruction at all, but experimenting with the ARM Evaluation System (an expansion for the BBC Micro) revealed that multiplying in software was just too slow.
ARM2 added the multiply and multiply-accumulate instructions to the instruction set. The implementation just used Booth recoding, performing the additions through the ALU, and took up to 16 cycles to execute. In other words it performed one Booth chunk per clock cycle, with early exit if there was no more work to do. And as in your article, it used the carry flag as an additional bit.
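For illustration, here's a radix-4 Booth multiply in Python that retires two multiplier bits per step and exits early when no multiplier bits remain - the spirit of the ARM2 scheme described above, not the actual hardware implementation:

```python
def booth_radix4_mul(m, r, bits=32):
    """Radix-4 Booth multiply: 2 multiplier bits per step, with early exit.

    Each step consumes a digit d in {-2, -1, 0, +1, +2} derived from bits
    (b[2i+1], b[2i], b[2i-1]) of r, accumulating d * m * 4**i. The 'prev'
    bit carried between steps plays the role ARM2 gave to the carry flag.
    Returns (low `bits` bits of the product, number of steps taken).
    """
    mask = (1 << bits) - 1
    acc = 0
    prev = 0          # b[2i-1] from the previous chunk
    cycles = 0
    for i in range(bits // 2):
        chunk = (r >> (2 * i)) & 0b11
        d = (chunk & 1) + prev - 2 * (chunk >> 1)
        acc += d * (m << (2 * i))
        prev = chunk >> 1
        cycles += 1
        if (r >> (2 * i + 2)) == 0 and prev == 0:
            break     # early exit: all remaining Booth digits are zero
    return acc & mask, cycles

print(booth_radix4_mul(7, 6))  # (42, 2): small multipliers finish quickly
```

The masking at the end means the result matches the low 32 bits of the product even when the top multiplier bit is set, just as a hardware MUL returns the truncated product.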
I suspect the documentation says 'the carry is unreliable' because the carry behaviour could be different between the ARM2 implementation and ARM7TDMI, when given the same operands. Or indeed between implementations of ARMv4, because the fast multiplier was an optional component if I recall correctly. The 'M' in ARM7TDMI indicates the presence of the fast multiplier.
No, they aren't. GS1 - the organisation that administers retail barcodes - has rules about when to change the Global Trade Item Number for a product. The GTIN is what you probably mean by 'UPC' but GS1 now distinguishes between the abstract product number (the GTIN), and the concrete barcode symbology containing a 12-digit GTIN (UPC-A).
The Guiding Principles are:
* Is a consumer and/or trading partner expected to distinguish the changed or new product from previous/current products?
* Is there a regulatory/liability disclosure requirement to the consumer and/or trading partner?
* Is there a substantial impact to the supply chain (e.g., how the product is shipped, stored, received)?
The UK switched from selling petrol in gallons to litres in the 1980s. I think I just about recall petrol prices suddenly changing dramatically when I was fairly young - I used to help my Dad keep records of how much fuel we'd bought at what price. I'd write down the figures in the book while he went to pay (it was always self-service) so I must have been old enough to be left alone for 5 minutes!
This Energy Institute statistical series - https://knowledge.energyinst.org/search/record?id=58969 - says that their records changed from "new pence per gallon" to "new pence per litre" at the start of 1989. That seems late for my recollection.
Looking back at historical data from https://www.gov.uk/government/statistical-data-sets/oil-and-..., it appears that the average price for "4 star" petrol (97 RON) crossed the £1 per gallon threshold some time in 1979 (Table 4.1.3, and multiply by 4.54609). I'm not old enough to remember that!
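The conversion in question, as a quick sketch (the 22.5p entry is a made-up illustration, not a figure from the table):

```python
LITRES_PER_UK_GALLON = 4.54609

def per_litre_to_per_gallon(pence_per_litre):
    """Convert a pence-per-litre price to pence per UK gallon."""
    return pence_per_litre * LITRES_PER_UK_GALLON

# The £1-per-gallon threshold works out at about 22p per litre:
print(f"100p/gallon = {100 / LITRES_PER_UK_GALLON:.1f}p/litre")
# So a hypothetical table entry of 22.5p/litre is already over £1/gallon:
print(f"22.5p/litre = {per_litre_to_per_gallon(22.5):.1f}p/gallon")
```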
By 1989, prices were at 168.8 pence per gallon (i.e. £1.68). So I think the story that the change was made because prices had gone over £1 per gallon has to be a myth. However, retailers certainly weren't complaining about the displayed price being less than one quarter of what it had been! In contrast, they were much less happy about prices per kilogram being more than twice the price per pound (weight).
Prices crossed £1 per litre for 'Premium Unleaded' (95 RON) in November 2007. They fell back below this level in November 2008 but went back up over it in June 2009.
There's obviously a caveat there that the keyboard could have been replaced - but based on the colour of the plastics it would certainly be a similar age.
The keyboard has AlphaLock rather than Caps Lock. I can't find a manual saying exactly what it did! You're right that the Caps Lock key on PC and Mac keyboards only capitalises the alphabetic keys without affecting the symbolic or numeric characters. Some international layouts may still offer a Shift Lock?