Wednesday, October 20, 2021

Announcing: Fourmilab Blockchain Tools

Fourmilab Blockchain Tools provide a variety of utilities for users, experimenters, and researchers working with blockchain-based cryptocurrencies such as Bitcoin and Ethereum. These are divided into two main categories.

Bitcoin and Ethereum Address Tools

These programs assist in generating, analysing, archiving, protecting, and monitoring addresses on the Bitcoin and Ethereum blockchains. They do not require you to run a local node or maintain a copy of the blockchain, and all security-related functions may be performed on an "air-gapped" machine with no connection to the Internet or any other computer.

  • Blockchain Address Generator creates address and private key pairs for both the Bitcoin and Ethereum blockchains, supporting a variety of random generators, address types, and output formats.

  • Multiple Key Manager allows you to split the secret keys associated with addresses into n parts, any k ≤ n of which can be used to reconstruct the original key, allowing a variety of secure custodial strategies (a sketch of one such k-of-n scheme appears after this list).

  • Paper Wallet Utilities includes a Paper Wallet Generator which transforms a list of addresses and private keys generated by the Blockchain Address Generator or parts of keys produced by the Multiple Key Manager into an HTML file which may be printed for off-line "cold storage", and a Cold Storage Wallet Validator that provides independent verification of the correctness of off-line copies of addresses and keys.

  • Cold Storage Monitor connects to free blockchain query services to allow periodic monitoring of a list of cold storage addresses to detect unauthorised transactions which may indicate they have been compromised.
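
To make the "any k ≤ n" property concrete, here is a minimal sketch of Shamir-style secret splitting in Python. It is an illustration only: it is not necessarily the scheme, and certainly not the code, used by the Multiple Key Manager, and it omits the share encoding, integrity checks, and hardening a real custodial tool needs.

    import secrets

    PRIME = 2**521 - 1      # a Mersenne prime, comfortably larger than a 256-bit key

    def split(secret, k, n):
        """Split an integer secret into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):      # Horner evaluation of the polynomial mod PRIME
                y = (y * x + c) % PRIME
            shares.append((x, y))
        return shares

    def combine(shares):
        """Recover the secret from k shares by Lagrange interpolation at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = secrets.randbits(256)             # stand-in for a private key
    shares = split(key, k=3, n=5)
    assert combine(shares[:3]) == key       # any three of the five shares suffice
    assert combine(shares[2:]) == key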

Bitcoin Blockchain Analysis Tools

This collection of tools allows various kinds of monitoring and analysis of the Bitcoin blockchain. They do not support Ethereum. These programs are intended for advanced, technically-oriented users who run their own full Bitcoin Core node on a local computer. Note that anybody can run a Bitcoin node as long as they have a computer with the modest CPU and memory capacity required, plus the very large (and inexorably growing) file storage capacity to archive the entire Bitcoin blockchain. You can run a Bitcoin node without being a "miner", nor need you expose your computer to external accesses from other nodes unless you so wish.

These tools are all read-only monitoring and analysis utilities. They do not generate transactions of any kind, nor do they require unlocked access to the node owner's wallet.

  • Address Watch monitors the Bitcoin blockchain and reports any transactions which reference addresses on a "watch list", either deposits to the address or spending of funds from it. The program may also be used to watch activity on the blockchain, reporting statistics on blocks as they are mined and published. (A sketch of this kind of watch-list scan appears after this list.)

  • Confirmation Watch examines blocks as they are mined and reports confirmations for a transaction as they arrive.

  • Transaction Fee Watch analyses the transaction fees paid to include transactions in blocks and the reward to miners, and produces real-time statistics and log files which may be used to analyse transaction fees over time.
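
For the curious, here is a minimal sketch, in Python, of the kind of block-scanning loop a watch-list monitor performs against a local Bitcoin Core node. It is only an illustration, not the code of Address Watch itself: the watched address is a made-up placeholder, the scriptPubKey field names vary between Bitcoin Core versions, and detecting spends (as opposed to deposits) would additionally require looking up each input's previous output, which this sketch omits.

    import json, subprocess, time

    WATCH = {"bc1qexampleplaceholderaddress000000000000"}    # hypothetical address

    def cli(*args):
        """Invoke bitcoin-cli and parse JSON output where applicable."""
        out = subprocess.run(["bitcoin-cli", *args], capture_output=True,
                             text=True, check=True).stdout
        return json.loads(out) if out.lstrip().startswith(("{", "[")) else out.strip()

    last = int(cli("getblockcount"))
    while True:
        tip = int(cli("getblockcount"))
        for height in range(last + 1, tip + 1):
            block = cli("getblock", cli("getblockhash", str(height)), "2")
            for tx in block["tx"]:
                for vout in tx["vout"]:
                    spk = vout["scriptPubKey"]
                    addrs = set(spk.get("addresses", []))    # older Core versions
                    if "address" in spk:                     # newer Core versions
                        addrs.add(spk["address"])
                    hits = addrs & WATCH
                    if hits:
                        print(height, tx["txid"], vout["value"], sorted(hits))
        last = tip
        time.sleep(60)                                       # poll once a minute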

Details

You can download the complete source code distribution, including ready-to-run versions of all of the programs, from the Fourmilab Blockchain Tools home page.

All of this software is licensed under the Creative Commons Attribution-ShareAlike license.

Please see the Fourmilab Blockchain Tools User Guide [PDF] for details or read the complete source code [PDF] in Perl and Python written using the Literate Programming methodology with the nuweb system.

Posted at 16:38 Permalink

Saturday, July 31, 2021

Flashback Version 1.8 Update Released

I have just posted an update, version 1.8, of Flashback, my instant directory tree snapshot utility for Linux and other Unix-like systems. The major change in this release is fixing problems which occurred with file names that contain spaces and characters which have special meanings to the shell, including horrors such as:
File with rogue's gallery: ~`#$&*()\|[]{};"'''<>?!
In addition, Flashback can be configured to use a variety of file compression utilities such as gzip, bzip2, and xz, automatically back up to removable media such as USB drives when inserted, and mirror backups on remote systems with scp.
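
As an illustration of the quoting problem such names create (this is not Flashback's own code, just a sketch in Python using the rogue's-gallery name above), the safe approaches are either to quote the name before handing it to a shell or to bypass the shell entirely:

    import shlex, subprocess

    name = """File with rogue's gallery: ~`#$&*()\\|[]{};"'''<>?!"""

    # Interpolating the raw name into a shell command would let the shell act
    # on the metacharacters; shlex.quote() renders it inert.
    print("tar czf backup.tar.gz -- " + shlex.quote(name))

    # Safer still: pass an argument vector, so no shell parsing happens at all.
    subprocess.run(["ls", "-ld", "--", name], check=False)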

Posted at 12:45 Permalink

Saturday, May 16, 2020

UNUM 3.2: Updated to Unicode 13

Version 3.2 of UNUM is now available for downloading. Version 3.2 incorporates the Unicode 13.0.0 standard, released on March 10th, 2020. The update to Unicode adds support for four new scripts, additional CJK (Chinese, Japanese, and Korean) symbols, 55 new emoji, and symbols from legacy computer and teletext systems and Creative Commons licenses. There are a total of 143,859 characters in 13.0.0, of which 5930 are new since 12.1.0. (UNUM also supports an additional 65 ASCII control characters, which are not assigned graphic code points in the Unicode database.)

This is an incremental update to Unicode. There are no structural changes in how characters are defined in the databases, and other than the presence of the new characters, the operation of UNUM is unchanged.

UNUM also contains a database of HTML named character references (the sequences like “&lt;” you use in HTML source code when you need to represent a character which has a syntactic meaning in HTML or which can't be directly included in a file with the character encoding you're using to write it). There have been no changes to this standard since UNUM 2.2 was released in September 2017, so UNUM 3.2 will behave identically when querying these references except, of course, that numerical references to the new Unicode characters will be interpreted correctly.

UNUM Documentation and Download Page

Posted at 13:26 Permalink

Saturday, May 9, 2020

ISBNiser and ISBNquest Version 2.1 Released

I have just posted version 2.1 of the ISBNiser utility and ISBNquest Web resource. These are utilities which validate, inter-convert, and properly format all varieties of International Standard Book Number (ISBN) specifications. Both utilities have been updated to use the most recent version of the ISBN Range database (Wed, 6 May 2020 14:51:46 CEST), replacing the October 2018 version previously used. The range database is used to parse ISBNs into their components (Prefix, Registration group, Registrant, Publication, and Checksum) and to re-format them with the correct punctuation.
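
The Checksum component, at least, is easy to illustrate independently of the range database. Here is the standard ISBN-13 check-digit rule in a few lines of Python (this is not ISBNiser's code, and proper hyphenation of the other components genuinely requires the range database):

    def isbn13_check_digit(first12):
        """Check digit for the first twelve digits of an ISBN-13 (weights 1,3,1,3,...)."""
        total = sum((1 if i % 2 == 0 else 3) * int(d) for i, d in enumerate(first12))
        return str((10 - total % 10) % 10)

    def isbn13_is_valid(isbn):
        digits = isbn.replace("-", "").replace(" ", "")
        return len(digits) == 13 and digits.isdigit() and \
               digits[-1] == isbn13_check_digit(digits[:12])

    print(isbn13_is_valid("978-1-7341993-0-7"))   # an ISBN cited in the reading list below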

ISBNquest has been updated to use the new Amazon Product Advertising API 5.0 to look up books on Amazon and find title, author, and other information for a book from its ISBN. This replaces the 4.0 version of the API which has been retired and no longer works. The mechanism used to locate Kindle editions of print books has been completely redesigned and should now work for many more (but, due to limitations in the API, not all) books.

There are no user interface changes in either of these utilities, and updating to them should be completely transparent for all human and programmatic queries.

Posted at 20:57 Permalink

Tuesday, April 21, 2020

The Fourmilab Reading List Returns to its Roots

When I began the Fourmilab Reading List in January 2001, it was just that: a list of every book I'd read, updated as I finished books, without any commentary other than, perhaps, availability information and sources for out-of-print works or those from publishers not available through Amazon.com. As the 2000s progressed, I began to add remarks about many of the books, originally limited to one paragraph, but eventually as the years wore on, expanding to full-blown reviews, some sprawling to four thousand words or more and using the book as the starting point for an extended discussion on topics related to its content.

This is, sadly, to employ a term I usually despise, no longer sustainable. My time has become so entirely consumed by system administration tasks on two Web sites, especially one which I made the disastrous blunder of basing upon WordPress, the most incompetent and irresponsible piece of...software I have ever encountered in more than fifty years of programming; shuffling papers, filling out forms, and other largely government-mandated bullshit (Can I say that here? It's my site! You bet I can.); and creating content for and participating in discussions on the premier anti-social network on the Web for intelligent people around the globe with wide-ranging interests, that I simply no longer have the time to sit down, compose, edit, and publish lengthy reviews (in three locations: in the Reading List, here, and at Ratburger.org) of every book I read.

But that hasn't kept me from reading books, which is my major recreation and escape from the grinding banality which occupies most of my time. As a consequence, I have accumulated, as of the present time, a total of no fewer than twenty-four books I've finished which are on the waiting list to be reviewed and posted here, and that doesn't count a few more I've set aside before finishing the last chapter and end material so as not to make the situation even worse and compound the feeling of guilt.

I will no longer post books I've read here, except those for which I write full reviews. If you'd like to keep up with new books as they are posted on the Reading List, subscribe to its RSS feed.

Posted at 16:00 Permalink

Friday, March 27, 2020

Reading List: Collapse

Schlichter, Kurt. Collapse. El Segundo, CA: Kurt Schlichter, 2019. ISBN 978-1-7341993-0-7.
In his 2016 novel People's Republic (November 2018), the author describes North America in the early 2030s, a decade after the present Cold Civil War turned hot and the United States split into the People's Republic of North America (PRNA) on the coasts and the upper Midwest, with the rest continuing to call itself the United States, its capital now in Dallas, purging itself of the “progressive” corruption which was now unleashed without limits in the PRNA. In that book we met Kelly Turnbull, retired from the military and veteran of the border conflicts at the time of the Split, who made his living performing perilous missions in the PRNA to rescue those trapped inside its borders.

In this, the fourth Kelly Turnbull novel (I have not yet read the second, Indian Country, nor the third, Wildfire), the situation in the PRNA has, as inevitably happens in socialist paradises, continued to deteriorate, and by 2035 its sullen population is growing increasingly restive and willing to go to extremes to escape to Mexico, which has built a big, beautiful wall to keep the starving hordes from El Norte from overrunning their country. Cartels smuggle refugees from the PRNA into Mexico where they are exploited in factories where they work for peanuts but where, unlike in the PRNA, you could at least buy peanuts.

With its back increasingly to the wall, the PRNA ruling class has come to believe their only hope is what they view as an alliance with China, and the Chinese see as colonisation, subjugation, and a foothold on the American continent. The PRNA and the People's Republic of China have much in common in overall economic organisation, although the latter is patriotic, proud, competent, and militarily strong, while the PRNA is paralysed by progressive self-hate, grievance group conflict, and compelled obeisance to counterfactual fantasies.

China already has assimilated Hawaii from the PRNA as a formal colony, and runs military bases on the West Coast as effectively sovereign territory. As the story opens, the military balance is about to shift toward great peril to the remaining United States, as the PRNA prepares to turn over a nuclear-powered aircraft carrier they inherited in the Split to China, which will allow it to project power in the Pacific all the way to the West Coast of North America. At the same time, a Chinese force appears to be massing to garrison the PRNA West Coast capital of San Francisco, allowing the PRNA to hang on and escalating any action by the United States against the PRNA into a direct conflict with China.

Kelly Turnbull, having earned enough from his previous missions to retire, is looking forward to a peaceful life when he is “invited” by the U.S. Army back onto active duty for one last high-stakes mission within the PRNA. The aircraft carrier, the former Theodore Roosevelt, now re-christened Mao, is about to become operational, and Turnbull is to infiltrate a renegade computer criminal, Quentin Welliver, now locked up in a Supermax prison, into the PRNA to work his software magic to destroy the carrier's power plant. Welliver is anything but cooperative, but then Turnbull can be very persuasive, and the unlikely team undertake the perilous entry to the PRNA and on-site hacking of the carrier.

As is usually the case when Kelly Turnbull is involved, things go sideways and highly kinetic, much to the dismay of Welliver, who is a fearsome warrior behind a keyboard, but less so when the .45 hollow points start to fly. Just when everything seems wrapped up, Turnbull and Welliver are “recruited” by the commando team they thought had been sent to extract them for an even more desperate but essential mission: preventing the Chinese fleet from landing in San Francisco.

If you like your thrillers with lots of action and relatively little reflection about what it all means, this is the book for you. Turnbull considers all of the People's Republic slavers and their willing minions as enemies and a waste of biochemicals better used to fertilise crops, and has no hesitation wasting them. The description of the PRNA is often very funny, although when speaking about California, it is already difficult to parody even the current state of affairs. Some references in the book will probably become quickly dated, such as Maxine Waters Pavilion of Social Justice (formerly SoFi Stadium) and the Junipero Serra statue on Interstate 280, whose Christian colonialist head was removed and replaced by an effigy of pre-Split hero Jerry Nadler. There are some delightful whacks at well-deserving figures such as “Vichy Bill” Kristol, founder of the True Conservative Party, which upholds the tradition of defeat with dignity in the PRNA, winning up to 0.4% of the vote and already planning to rally the stalwart aboard its “Ahoy: Cruising to Victory in 2036!” junket.

The story ends with a suitable bang, leaving the question of “what next?” While People's Republic was a remarkably plausible depiction of the situation after the red-blue divide split the country and “progressive” madness went to its logical conclusion, this is more cartoon-like, but great fun nonetheless.

Posted at 16:48 Permalink

Sunday, February 2, 2020

Reading List: Sonic Wind

Ryan, Craig. Sonic Wind. New York: Liveright Publishing, 2018. ISBN 978-0-631-49191-0.
Prior to the 1920s, most aircraft pilots had no means of escape in case of mechanical failure or accident. During World War I, one out of every eight combat pilots was shot down or killed in a crash. Germany experimented with cumbersome parachutes stored in bags in a compartment behind the pilot, but these often failed to deploy properly if the plane was in a spin or became tangled in the aircraft structure after deployment. Still, they did save the lives of a number of German pilots. (On the other hand, one of them was Hermann Göring.) Allied pilots were not issued parachutes because their commanders feared the loss of planes more than pilots, and worried pilots would jump rather than try to save a damaged plane.

From the start of World War II, military aircrews were routinely issued parachutes, and backpack or seat pack parachutes with ripcord deployment had become highly reliable. As the war progressed and aircraft performance rapidly increased, it became clear that although parachutes could save air crew, physically escaping from a damaged plane at high velocities and altitudes was a formidable problem. The U.S. P-51 Mustang, of which more than 15,000 were built, cruised at 580 km/hour and had a maximum speed of 700 km/hour. It was physically impossible for a pilot to escape from the cockpit into such a wind blast, and even if they managed to do so, they would likely be torn apart by collision with the fuselage or tail an instant later. A pilot's only hope was that the plane would slow to a speed at which escape was possible before crashing into the ground, bursting into flames, or disintegrating.

In 1944, when the Nazi Luftwaffe introduced the first operational jet fighter, the Messerschmitt Me 262, capable of 900 km/hour flight, they experimented with explosive-powered ejection seats, but never installed them in this front-line fighter. After the war, with each generation of jet fighters flying faster and higher than the previous, and supersonic performance becoming routine, ejection seats became standard equipment in fighter and high performance bomber aircraft, and saved many lives. Still, by the mid-1950s, one in four pilots who tried to eject was killed in the attempt. It was widely believed that the forces of blasting a pilot out of the cockpit, rapid deceleration by atmospheric friction, and wind blast at transonic and supersonic speeds were simply too much for the human body to endure. Some aircraft designers envisioned “escape capsules” in which the entire crew cabin would be ejected and recovered, but these systems were seen to be (and proved when tried) heavy and potentially unreliable.

John Paul Stapp's family came from the Hill Country of south central Texas, but he was born in Brazil in 1910 while his parents were Baptist missionaries there. After high school in Texas, he enrolled in Baylor University in Waco, initially studying music but then switching his major to pre-med. Upon graduation in 1931 with a major in zoology and minor in chemistry, he found that in the depths of the Depression there was no hope of affording medical school, so he enrolled in an M.A. program in biophysics, occasionally dining on pigeons he trapped on the roof of the biology building and grilled over Bunsen burners in the laboratory. He then entered a Ph.D. program in biophysics at the University of Texas, Austin, receiving his doctorate in 1940. Before leaving Austin, he was accepted by the medical school at the University of Minnesota, which promised him employment as a research assistant and instructor to fund his tuition.

In October 1940, with the possibility that war in Europe and the Pacific might entangle the country, the U.S. began military conscription. When the numbers were drawn from the fishbowl, Stapp's was 15th from the top. As a medical student, he received an initial deferment, but when it expired he joined the regular Army under a special program for medical students. While completing medical school, he would receive private's pay of US$ 32 a month (around US$7000 a year in today's money), which would help enormously with tuition and expenses. In December 1943 Stapp received his M.D. degree and passed the Minnesota medical board examination. He was commissioned as a second lieutenant in the Army Medical Corps and placed on suspended active duty for his internship in a hospital in Duluth, Minnesota, where he delivered 200 babies and assisted in 225 surgeries. He found he delighted in emergency and hands-on medicine. In the fall of 1944 he went on full active duty and began training in field medicine. After training, he was assigned as a medical officer at Lincoln Army Air Field in Nebraska, where he would combine graduate training with hospital work.

Stapp had been fascinated by aviation and the exploits of pioneers such as Charles Lindbergh and the stratospheric balloon explorers of the 1930s, and found working at an air base fascinating, sometimes arranging to ride along in training missions with crews he'd treated in the hospital. In April 1945 he was accepted by the Army School of Aviation Medicine in San Antonio, where he and his class of 150 received intense instruction in all aspects of human physiology relating to flight. After graduation and a variety of assignments as a medical officer, he was promoted to captain and invited to apply to the Aero Medical Laboratory at Wright Field in Dayton, Ohio for a research position in the Biophysics Branch. On the one hand, this was an ideal position for the intellectually curious Stapp, as it would combine his Ph.D. work and M.D. career. On the other, he had only eight months remaining in his service commitment, and he had long planned to leave the Army to pursue a career as a private physician. Stapp opted for the challenge and took the post at Wright.

Starting work, he was assigned to the pilot escape technology program as a “project engineer”. He protested, “I'm a doctor, not an engineer!”, but settled into the work and, being fluent in German, was assigned to review 1200 pages of captured German documents relating to crew ejection systems and their effects upon human subjects. Stapp was appalled by the Nazis' callous human experimentation, but, when informed that the Army intended to destroy the documents after his study was complete, took the initiative to preserve them, both for their scientific content and as evidence of the crimes of those whose research produced it.

The German research and the work of the branch in which Stapp worked had begun to persuade him that the human body was far more robust than had been assumed by aircraft designers and those exploring escape systems. It was well established by experiments in centrifuges at Wright and other laboratories that the maximum long-term human tolerance for acceleration (g-force) without special equipment or training was around six times that of Earth's gravity, or 6 g. Beyond that, subjects would lose consciousness, experience tissue damage due to lack of blood flow, or structural damage to the skeleton and/or internal organs. However, a pilot ejecting from a high performance aircraft experienced something entirely different from a subject riding in a centrifuge. Instead of a steady crush by, say, 6 g, the pilot would be subjected to much higher accelerations, perhaps on the order of 20—40 g, with an onset of acceleration (“jerk”) of 500 g per second. The initial blast of the mortar or rockets firing the seat out of the cockpit would be followed by a sharp pulse of deceleration as the pilot was braked from flight speed by air friction, during which he would be subjected to wind blast potentially ten times as strong as any hurricane. Was this survivable at all, and if so, what techniques and protective equipment might increase a pilot's chances of enduring the ordeal?

While pondering these problems and thinking about ways to research possible solutions under controlled conditions, Stapp undertook another challenge: providing supplemental oxygen to crews at very high altitudes. Stapp volunteered as a test subject as well as medical supervisor and began flight tests with a liquid oxygen breathing system on high altitude B-17 flights. Crews flying at these altitudes in unpressurised aircraft during World War II and afterward had frequently experienced symptoms similar to “the bends” (decompression sickness) which struck divers who ascended too quickly from deep waters. Stapp diagnosed the cause as identical: nitrogen dissolved in the blood coming out of solution as bubbles and pooling in joints and other bodily tissues. He devised a procedure of oxygen pre-breathing, where crews would breathe pure oxygen for half an hour before taking off on a high altitude mission, which completely eliminated the decompression symptoms. The identical procedure is used today by astronauts before they begin extravehicular activities in space suits using pure oxygen at low pressure.

From the German documents he studied, Stapp had become convinced that the tool he needed to study crew escape was a rocket propelled sled, running on rails, with a brake mechanism that could be adjusted to provide a precisely calibrated deceleration profile. When he learned that the Army was planning to build such a device at Muroc Army Air Base in California, he arranged to be put in charge of Project MX-981 with a charter to study the “effects of deceleration forces of high magnitude on man”. He arrived at Muroc in March 1947, along with eight crash test dummies to be used in the experiments. If Muroc (now Edwards Air Force Base) of the era was legendary for its Wild West accommodations (Chuck Yeager would not make his first supersonic flight there until October of that year), the North Base, where Stapp's project was located, was something out of Death Valley Days. When Stapp arrived to meet his team of contractors from Northrop Corporation they struck the always buttoned-down Stapp like a “band of pirates”. He also discovered the site had no electricity, no running water, no telephone, and no usable buildings. The Army, preoccupied with its glamourous high speed aviation projects, had neither interest in what amounted to a rocket powered train with a very short track, nor much inclination to provide it the necessary resources. Stapp commenced what he came to call the Battle of Muroc, mastering the ancient military art of scrounging and exchanging favours to get the material he needed and the work done.

As he settled in at Muroc and became acquainted with his fellow denizens of the desert, he was appalled to learn that the Army provided medical care only for active duty personnel, and that civilian contractors and families of servicemen, even the exalted test pilots, had to drive 45 miles to the nearest clinic. He began to provide informal medical care to all comers, often making house calls in the evening hours on his wheezing scooter, in return for home cooked dinners. This built up a network of people who owed him favours, which he was ready to call in when he needed something. He called this the “Curbstone Clinic”, and would continue the practice throughout his career. After some shaky starts and spectacular failures due to unreliable surplus JATO rockets, the equipment was ready to begin experiments with crash test dummies.

Stapp had always intended that the tests with dummies would be simply a qualification phase for later tests with human and animal subjects, and he would ask no volunteer to do something he wouldn't try himself. Starting in December, 1947, Stapp personally made increasingly ambitious runs on the sled, starting at “only” 10 g deceleration and building to 35 g with an onset jerk of 1000 g/second. The runs left him dizzy and aching, but very much alive and quick to recover. Although far from approximating the conditions of ejection from a supersonic fighter, he had already demonstrated that the Air Force's requirements for cockpit seats and crew restraints, often designed around a 6 g maximum shock, were inadequate and deadly. Stapp was about to start making waves, and some of the push-back would be daunting. He was ordered to cease all human experimentation for at least three months.

Many Air Force officers (for the Air Force had been founded in September 1947 and taken charge of the base) would have saluted and returned to testing with instrumented dummies. Stapp, instead, figured out how to obtain thirty adult chimpanzees, along with the facilities needed to house and feed them, and resumed his testing, with anæsthetised animals, up to the limits of survival. Stapp was, and remained throughout his career, a strong advocate for the value of animal experimentation. It was a grim business, but at the time Muroc was frequently losing test pilots at the rate of one a week, and Stapp believed that many of these fatalities were unnecessary and could be avoided with proper escape and survival equipment, which could only be qualified through animal and cautious human experimentation.

By September 1949, approval to resume human testing was given, and Stapp prepared for new, more ambitious runs, with the subject facing forward on the sled instead of backward as before, which would more accurately simulate the forces in an ejection or crash and expose him directly to air blast. He rapidly ramped up the runs, reaching 32 g without permanent injury. To avoid alarm on the part of his superiors in Dayton, a “slight error” was introduced in the reports he sent: all g loads from the runs were accidentally divided by two.

Meanwhile, Stapp was ramping up his lobbying for safer seats in Air Force transport planes, arguing that the existing 6 g forward facing seats and belts were next to useless in many survivable crashes. Finally, with the support of twenty Air Force generals, in 1950 the Air Force adopted a new rear-facing standard seat and belt rated for 16 g which weighed only two pounds more than those it replaced. The 16 g requirement (although not the rearward-facing orientation, which proved unacceptable to paying customers) remains the standard for airliner seats today, seven decades later.

In June, 1951, Stapp made his final run on the MX-981 sled at what was now Edwards Air Force Base, decelerating from 180 miles per hour (290 km/h) to zero in 31 feet (9.45 metres), at 45.4 g, a force comparable to many aircraft and automobile accidents. The limits of the 2000 foot track (and the human body) had been reached. But Stapp was not done: the frontier of higher speeds remained. Shortly thereafter, he was promoted to lieutenant colonel and given command of what was called the Special Projects Section of the Biophysics Branch of the Aero Medical Laboratory. He was reassigned to Holloman Air Force Base in New Mexico, where the Air Force was expanding its existing 3500 foot rocket sled track to 15,000 feet (4.6 km), allowing testing at supersonic speeds. (The Holloman High Speed Test Track remains in service today, having been extended in a series of upgrades over the years to a total of 50,917 feet (15.5 km) and a maximum speed of Mach 8.6, or 2.9 km/sec [6453 miles per hour].)

Northrop was also contractor for the Holloman sled, and devised a water brake system which would be more reliable and permit any desired deceleration profile to be configured for a test. An upgraded instrumentation system would record photographic and acceleration measurements with much better precision than anything at Edwards. The new sled was believed to be easily capable of supersonic speeds and was named Sonic Wind. By March 1954, the preliminary testing was complete and Stapp boarded the sled. He experienced a 12 g acceleration to the peak speed of 421 miles per hour, then 22 g deceleration to a full stop, all in less than eight seconds. He walked away, albeit a little wobbly. He had easily broken the previous land speed record of 402 miles per hour and become “the fastest man on Earth.” But he was not done.

On December 10th, 1954, Stapp rode Sonic Wind, powered by nine solid rocket motors. Five seconds later, he was travelling at 639 miles per hour, faster than the .45 ACP round fired by the M1911A1 service pistol he was issued as an officer, around Mach 0.85 at the elevation of Holloman. The water brakes brought him to a stop in 1.37 seconds, a deceleration of 46.2 g. He survived, walked away (albeit just a few steps to the ambulance), and although suffering from vision problems for some time afterward, experienced no lasting consequences. It was estimated that the forces he survived were equivalent to those from ejecting at an altitude of 36,000 feet from an airplane travelling at 1800 miles per hour (Mach 2.7). As this was faster than any plane the Air Force had in service or on the drawing board, he proved that, given a suitable ejection seat, restraints, and survival equipment, pilots could escape and survive even under these extreme circumstances. The Big Run, as it came to be called, would be Stapp's last ride on a rocket sled and the last human experiment on the Holloman track. He had achieved the goal he set for himself in 1947: to demonstrate that crew survival in high performance aircraft accidents was a matter of creative and careful engineering, not the limits of the human body. The manned land speed record set on the Big Run would stand until October 1983, when Richard Noble's jet powered Thrust2 car set a new record of 650.88 miles per hour in the Nevada desert. Stapp remarked at the time that Noble had gone faster but had not, however, stopped from that speed in less than a second and a half.

From the early days of Stapp's work on human tolerance to deceleration, he was acutely aware that the forces experienced by air crew in crashes were essentially identical to those in automobile accidents. As a physician interested in public health issues, he had noted that the Air Force was losing more personnel killed in car crashes than in airplane accidents. When the Military Air Transport Service (MATS) adopted his recommendation and installed 16 g aft-facing seats in its planes, deaths and injuries from crashes had fallen by two-thirds. By the mid 1950s, the U.S. was suffering around 35,000 fatalities per year in automobile accidents—comparable to a medium-sized war—year in and year out, yet next to nothing had been done to make automobiles crash resistant and protect their occupants in case of an accident. Even the simplest precaution of providing lap belts, standard in aviation for decades, had not been taken; seats were prone to come loose and fly forward even in mild impacts; steering columns and dashboards seemed almost designed to impale drivers and passengers; and “safety” glass often shredded the flesh of those projected through it in a collision.

In 1954, Stapp turned some of his celebrity as the fastest man on Earth toward the issue of automobile safety and organised, in conjunction with the Society of Automotive Engineers (SAE), the first Automobile Crash Research Field Demonstration and Conference, which was attended by representatives of all of the major auto manufacturers, medical professional societies, and public health researchers. Stapp and the SAE insisted that the press be excluded: he wanted engineers from the automakers free to speak without fear their candid statements about the safety of their employers' products would be reported sensationally. Stapp conducted a demonstration in which a car was towed into a fixed barrier at 40 miles an hour with two dummies wearing restraints and two others just sitting in the seats. The belted dummies would have walked away, while the others flew into the barrier and would have almost certainly been killed. It was at this conference that many of the attendees first heard the term “second collision”. In car crashes, it was often not the crash of the car into another car or a barrier that killed the occupants: it was their colliding with dangerous items within the vehicle after flying loose following the initial impact.

Although the conference had been kept out of the press, word of Stapp's vocal advocacy of automobile safety quickly reached the auto manufacturers, who were concerned both about the marketing impact of the public becoming aware of the high level of deaths on the highways and of the inherent (and unnecessary) danger of their products to those who bought them, and about the bottom-line impact of potential government-imposed safety mandates. Auto state congressmen got the message, and the Air Force heard it from them: aeromedical research funding would be zeroed out unless car crash testing was terminated. It was.

Still, the conferences continued (they would eventually be renamed “Stapp Car Crash Conferences”), and Stapp became a regular witness before congressional committees investigating automobile safety. Testifying about whether it was appropriate for Air Force funds to be used in studying car crashes, in 1957 he said, “I have done autopsies on aircrew members who died in airplane crashes. I have also performed autopsies on aircrew members who died in car crashes. The only conclusion I could come to is that they were just as dead after a car crash as they were after an airplane crash.” He went on to note that simply mandating seatbelts in Air Force ground vehicles would save around 125 lives a year, and if they were installed and used by the occupants of all cars in the U.S., around 20,000 lives—more than half the death toll—could be saved. When he appeared before Congress, he bore not only the credentials of a medical doctor, a Ph.D. in biophysics, and an Air Force colonel, but also those of the man who had survived more violent decelerations equivalent to a car crash than any other human.

It was not until the 1960s that a series of mandates were adopted in the U.S. which required seat belts, first in the front seat and eventually for all passengers. Testifying in 1963 at a hearing to establish a National Accident Prevention Center, Stapp noted that the Air Force, which had already adopted and required the use of seat belts, had reduced fatalities in ground vehicle accidents by 50% with savings estimated at US$ 12 million per year. In September 1966, President Lyndon Johnson signed two bills, the National Traffic and Motor Vehicle Safety Act and the Highway Safety Act, creating federal agencies to research vehicle safety and mandate standards. Standing behind the president was Colonel John Paul Stapp: the long battle was, if not won, at least joined.

Stapp had hoped for a final promotion to flag rank before retirement, but concluded he had stepped on too many toes and ignored too many Pentagon directives during his career to ever wear that star. In 1967, he was loaned by the Air Force to the National Highway Traffic Safety Administration to continue his auto safety research. He retired from the Air Force in 1970 with the rank of full colonel and in 1973 left what he had come to call the “District of Corruption” to return to New Mexico. He continued to attend and participate in the Stapp Car Crash Conferences, his last being the Forty-Third in 1999. He died at his home in Alamogordo, New Mexico in November that year at the age of 89.

In his later years, John Paul Stapp referred to the survivors of car crashes who would have died without the equipment designed and eventually mandated because of his research as “the ghosts that never happened”. In 1947, when Stapp began his research on deceleration and crash survival, motor vehicle deaths in the U.S. were 8.41 per 100 million vehicle miles travelled (VMT). When he retired from the Air Force in 1970, after adoption of the first round of seat belt and auto design standards, they had fallen to 4.74 (which covers the entire fleet, many of which were made before the adoption of the new standards). At the time of his death in 1999, fatalities per 100 million VMT were 1.55, an improvement in safety of more than a factor of five. Now, Stapp was not solely responsible for this, but it was his putting his own life on the line which showed that crashes many considered “unsurvivable” were nothing of the sort with proper engineering and knowledge of human physiology. There are thousands of aircrew and tens or hundreds of thousands of “ghosts that never happened” who owe their lives to John Paul Stapp. Maybe you know one; maybe you are one. It's worth a moment remembering and giving thanks to the largely forgotten man who saved them.

Posted at 17:00 Permalink

Tuesday, January 7, 2020

UNUM 3.1: Updated to Unicode 12.1.0, UTF-8 Support Added

Version 3.1 of UNUM is now available for downloading. Version 3.1 incorporates the Unicode 12.1.0 standard, released on May 7th, 2019. Since the Unicode 11.0.0 standard supported by UNUM 3.0, a total of 555 new characters have been added, for a total of 137,929 characters. Unicode 12.0.0 added support for 4 new scripts (for a total of 150) and 61 new emoji characters. Unicode 12.1.0 added the single character U+32FF, the Japanese character for the Reiwa era. (In addition to the standard Unicode characters, UNUM also supports an additional 65 ASCII control characters, which are not assigned graphic code points in the Unicode database.)

This is an incremental update to Unicode. There are no structural changes in how characters are defined in the databases, and other than the presence of the new characters, the operation of UNUM is unchanged. There have been no changes to the HTML named character reference standard since the release of UNUM version 2.2 in September 2017, so UNUM 3.1 is identical in this regard.

UNUM 3.1 adds support for the UTF-8 encoding of Unicode, and allows specification of characters as UTF-8 encoded byte streams expressed as numbers, for example:

    $ unum utf8=0xE298A2
       Octal  Decimal      Hex        HTML    Character   Unicode
      023042     9762   0x2622     &#9762;    "☢"         RADIOACTIVE SIGN

A new --utf8 option displays the UTF-8 encoding of characters as a hexadecimal byte stream:

  $ unum --utf8 h=sum
     Octal  Decimal      Hex        HTML       UTF-8      Character   Unicode
    021021     8721   0x2211 &Sum;,&sum;    0xE28891      "∑"         N-ARY SUMMATION
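
The byte sequences UNUM reports can be cross-checked independently with Python's built-in codecs; this is a trivial verification, not part of UNUM itself:

    assert "\u2622".encode("utf-8") == bytes.fromhex("E298A2")   # RADIOACTIVE SIGN
    assert "\u2211".encode("utf-8") == bytes.fromhex("E28891")   # N-ARY SUMMATION
    print("\u2211".encode("utf-8").hex().upper())                # prints E28891
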
UNUM Documentation and Download Page

Posted at 15:53 Permalink

Reading List: The Simulation Hypothesis

Virk, Rizwan. The Simulation Hypothesis. Cambridge, MA: Bayview Books, 2019. ISBN 978-0-9830569-0-4.
Before electronic computers had actually been built, Alan Turing mathematically proved a fundamental and profound property of them which has been exploited in innumerable ways as they developed and became central to many of our technologies and social interactions. A computer of sufficient complexity, which is, in fact, not very complex at all, can simulate any other computer or, in fact, any deterministic physical process whatsoever, as long as it is understood sufficiently to model in computer code and the system being modelled does not exceed the capacity of the computer—or the patience of the person running the simulation. Indeed, some of the first applications of computers were in modelling physical processes such as the flight of ballistic projectiles and the hydrodynamics of explosions. Today, computer modelling and simulation have become integral to the design process for everything from high-performance aircraft to toys, and many commonplace objects in the modern world could not have been designed without the aid of computer modelling. It certainly changed my life.

Almost as soon as there were computers, programmers realised that their ability to simulate, well…anything made them formidable engines for playing games. Computer gaming was originally mostly a furtive and disreputable activity, perpetrated by gnome-like programmers on the graveyard shift while the computer was idle, having finished the “serious” work paid for by unimaginative customers (who actually rose before the crack of noon!). But as the microelectronics revolution slashed the size and price of computers to something individuals could afford for their own use (or, according to the computer Puritans of the previous generations, abuse), computer gaming came into its own. Some modern computer games have production and promotion budgets larger than Hollywood movies, and their characters and story lines have entered the popular culture. As computer power has grown exponentially, games have progressed from tic-tac-toe, through text-based adventures, simple icon character video games, to realistic three dimensional simulated worlds in which the players explore a huge world, interact with other human players and non-player characters (endowed with their own rudimentary artificial intelligence) within the game, and in some games and simulated worlds, have the ability to extend the simulation by building their own objects with which others can interact. If your last experience with computer games was the Colossal Cave Adventure or Pac-Man, try a modern game or virtual world—you may be amazed.

Computer simulations on affordable hardware are already beginning to approach the limits of human visual resolution, perception of smooth motion, and audio bandwidth and localisation, and some procedurally-generated game worlds are larger than a human can explore in a million lifetimes. Computer power is forecast to continue to grow exponentially for the foreseeable future and, in the Roaring Twenties, permit solving a number of problems through “brute force”—simply throwing computing power and massive data storage capacity at them without any deeper fundamental understanding of the problem. Progress in the last decade in areas such as speech recognition, autonomous vehicles, and games such as Go are precursors to what will be possible in the next.

This raises the question of how far it can go—can computer simulations actually approach the complexity of the real world, with characters within the simulation experiencing lives as rich and complex as our own and, perhaps, not even suspect they're living in a simulation? And then, we must inevitably speculate whether we are living in a simulation, created by beings at an outer level (perhaps themselves many levels deep in a tree of simulations which may not even have a top level). There are many reasons to suspect that we are living in a simulation; for many years I have said it's “more likely than not”, and others, ranging from Stephen Hawking to Elon Musk and Scott Adams, have shared my suspicion. The argument is very simple.

First of all, will we eventually build computers sufficiently powerful to provide an authentic simulated world to conscious beings living within it? There is no reason to doubt that we will: no law of physics prevents us from increasing the power of our computers by at least a factor of a trillion from those of today, and the lesson of technological progress has been that technologies usually converge upon their physical limits and new markets emerge as they do, using their capabilities and funding further development. Continued growth in computing power at the rate of the last fifty years should begin to make such simulations possible some time between 2030 and the end of this century.

So, when we have the computing power, will we use it to build these simulations? Of course we will! We have been building simulations to observe their behaviour and interact with them, for ludic and other purposes, ever since the first primitive computers were built. The market for games has only grown as they have become more complex and realistic. Imagine what it will be like when anybody can create a whole society—a whole universe—then let it run to see what happens, or enter it and experience it first-hand. History will become an experimental science. What would have happened if the Roman empire had discovered the electromagnetic telegraph? Let's see!—and while we're at it, run a thousand simulations with slightly different initial conditions and compare them.

Finally, if we can create these simulations which are so realistic the characters within them perceive them as their real world, why should we dare such non-Copernican arrogance as to assume we're at the top level and not ourselves within a simulation? I believe we shouldn't, and to me the argument that clinches it is what I call the “branching factor”. Just as we will eventually, indeed, I'd say, inevitably, create simulations as rich as our own world, so will the beings within them create their own. Certainly, once we can, we'll create many, many simulations: as many or more as there are running copies of present-day video games, and the beings in those simulations will as well. But if each simulation creates its own simulations in a number (the branching factor) even a tiny bit larger than one, there will be exponentially more observers in these layers on layers of simulations than at the top level. And, consequently, as non-privileged observers according to the Copernican Principle, it is not just more likely than not, but overwhelmingly probable that we are living in a simulation.
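
To put rough numbers on the branching-factor argument (the values of b and d below are arbitrary illustrative choices of mine, not figures from the book), a few lines of Python suffice:

    def top_level_fraction(b, d):
        """Fraction of all worlds that is the single top level, assuming every
        world spawns b simulations and nesting runs d levels deep."""
        return 1 / sum(b**k for k in range(d + 1))   # 1 + b + b^2 + ... + b^d worlds

    for b in (1.1, 2, 10):
        print(b, top_level_fraction(b, d=10))
    # Roughly 0.054, 0.00049, and 9e-11: even a branching factor barely above
    # one leaves the top level a small minority once the nesting is deep.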

The author of this book, founder of Play Labs @ MIT, a start-up accelerator which works in conjunction with the MIT Game Lab, and producer of a number of video games, has come to the same conclusion, and presents the case for the simulation hypothesis from three perspectives: computer science, physics, and the unexplained (mysticism, esoteric traditions, and those enduring phenomena and little details which don't make any sense when viewed from the conventional perspective but may seem perfectly reasonable once we accept we're characters in somebody else's simulation).

Computer Science. The development of computer games is sketched from their origins to today's three-dimensional photorealistic multiplayer environments into the future, where virtual reality mediated by goggles, gloves, and crude haptic interfaces will give way to direct neural interfaces to the brain. This may seem icky and implausible, but so were pierced lips, eyebrows, and tongues when I was growing up, and now I see them everywhere, without the benefit of directly jacking in to a world larger, more flexible, and more interesting than the dingy one we inhabit. This is sketched in eleven steps, the last of which is the Simulation Point, where we achieve the ability to create simulations which “are virtually indistinguishable from a base physical reality.” He describes, “The Great Simulation is a video game that is so real because it is based upon incredibly sophisticated models and rendering techniques that are beamed directly into the mind of the players, and the actions of artificially generated consciousness are indistinguishable from real players.” He identifies nine technical hurdles which must be overcome in order to arrive at the Simulation Point. Some, such as simulating a sufficiently large world and number of players, are challenging but straightforward scaling up of things we're already doing, which will become possible as computer power increases. Others, such as rendering completely realistic objects and incorporating physical sensations, exist in crude form today but will require major improvements we don't yet know how to build, while technologies such as interacting directly with the human brain and mind and endowing non-player characters within the simulation with consciousness and human-level intelligence have yet to be invented.

Physics. There are a number of aspects of the physical universe, most revealed as we have observed at very small and very large scales, and at speeds and time intervals far removed from those with which we and our ancestors evolved, that appear counterintuitive if not bizarre to our expectations from everyday life. We can express them precisely in our equations of quantum mechanics, special and general relativity, electrodynamics, and the standard models of particle physics and cosmology, and make predictions which accurately describe our observations, but when we try to understand what is really going on or why it works that way, it often seems puzzling and sometimes downright weird.

But as the author points out, when you view these aspects of the physical universe through the eyes of a computer game designer or builder of computer models of complex physical systems, they look oddly familiar. Here is how I expressed it thirteen years ago in my 2006 review of Leonard Susskind's The Cosmic Landscape:

What would we expect to see if we inhabited a simulation? Well, there would probably be a discrete time step and granularity in position fixed by the time and position resolution of the simulation—check, and check: the Planck time and distance appear to behave this way in our universe. There would probably be an absolute speed limit to constrain the extent we could directly explore and impose a locality constraint on propagating updates throughout the simulation—check: speed of light. There would be a limit on the extent of the universe we could observe—check: the Hubble radius is an absolute horizon we cannot penetrate, and the last scattering surface of the cosmic background radiation limits electromagnetic observation to a still smaller radius. There would be a limit on the accuracy of physical measurements due to the finite precision of the computation in the simulation—check: Heisenberg uncertainty principle—and, as in games, randomness would be used as a fudge when precision limits were hit—check: quantum mechanics.

Indeed, these curious physical phenomena begin to look precisely like the kinds of optimisations game and simulation designers employ to cope with the limited computer power at their disposal. The author notes, “Quantum Indeterminacy, a fundamental principle of the material world, sounds remarkably similar to optimizations made in the world of computer graphics and video games, which are rendered on individual machines (computers or mobile phones) but which have conscious players controlling and observing the action.”

One of the key tricks in complex video games is “conditional rendering”: you don't generate the images or worry about the physics of objects which the player can't see from their current location. This is remarkably like quantum mechanics, where the act of observation reduces the state vector to a discrete measurement and collapses its complex extent in space and time into a known value. In video games, you only need to evaluate when somebody's looking. Quantum mechanics is largely encapsulated in the tweet by Aatish Bhatia, “Don't look: waves. Look: particles.” It seems our universe works the same way. Curious, isn't it?

Similarly, games and simulations exploit discreteness and locality to reduce the amount of computation they must perform. The world is approximated by a grid, and actions in one place can only affect neighbours and propagate at a limited speed. This is precisely what we see in field theories and relativity, where actions are local and no influence can propagate faster than the speed of light.

The unexplained. Many esoteric and mystic traditions, especially those of the East such as Hinduism and Buddhism, describe the world as something like a dream, in which we act and our actions affect our permanent identity in subsequent lives. Western traditions, including the Abrahamic religions, see life in this world as a temporary thing, where our acts will be judged by a God who is outside the world. These beliefs come naturally to humans, and while there is little or no evidence for them in conventional science, it is safe to say that far more people believe and have believed these things and have structured their lives accordingly than those who have adopted the strictly rationalistic viewpoint one might deduce from deterministic, reductionist science.

And yet, once again, in video games we see the emergence of a model which is entirely compatible with these ancient traditions. Characters live multiple lives, and their actions in the game cause changes in a state (“karma”) which is recorded outside the game and affects what they can do. They complete quests, which affect their karma and capabilities, and upon completing a quest, they may graduate (be reincarnated) into a new life (level), in which they retain their karma from previous lives. Just as players who exist outside the game can affect events and characters within it, various traditions describe actors outside the natural universe (hence “supernatural”) such as gods, angels, demons, and spirits of the departed, interacting with people within the universe and occasionally causing physical manifestations (miracles, apparitions, hauntings, UFOs, etc.). And perhaps the simulation hypothesis can even explain absence of evidence: the sky in a video game may contain a multitude of stars and galaxies, but that doesn't mean each is populated by its own video game universe filled with characters playing the same game. No, it's just scenery, there to be admired but with which you can't interact. Maybe that's why we've never detected signals from an alien civilisation: the stars are just procedurally generated scenery to make our telescopic views more interesting.

The author concludes with a summary of the evidence we may be living in a simulation and the objections of sceptics (such as the claim that a computer as large and complicated as the universe would be required to simulate a universe). He suggests experiments which might detect the granularity of the simulation and provide concrete evidence the universe is not the continuum most of science has assumed it to be. A final chapter presents speculations as to who might be running the simulation, what their motives might be for doing so, and the nature of beings within the simulation. I'm cautious of delusions of grandeur in making such guesses. I'll bet we're a science fair project, and I'll further bet that within a century we'll be creating a multitude of simulated universes for our own science fair projects.

Posted at 01:00 Permalink

Wednesday, January 1, 2020

Reading List: The City of Illusions

Wood, Fenton. The City of Illusions. Seattle: Amazon Digital Services, 2019. ASIN B082692JTX.
This is the fourth short novel/novella (148 pages) in the author's Yankee Republic series. I described the first, Pirates of the Electromagnetic Waves (May 2019), as “utterly charming”, and the second, Five Million Watts (June 2019), “enchanting”. The third, The Tower of the Bear (October 2019), takes Philo from the depths of the ocean to the Great Tree in the exotic West.

Here, the story continues as Philo reaches the Tree, meets its Guardian, “the largest, ugliest, and smelliest bear” he has ever seen, not to mention the most voluble and endowed with the wit of eternity, and explores the Tree, which holds gateways to other times and places, where Philo must confront a test which has defeated many heroes who have come this way before. Exploring the Tree, he learns of the distant past and future, of the Ancient Marauder and Viridios before the dawn of history, and of the War that changed the course of time.

Continuing his hero's quest, he ventures further westward along the Tyrant's Road into the desert of the Valley of Death. There he will learn the fate of the Tyrant and his enthralled followers and, if you haven't figured it out already, you will probably now understand where Philo's timeline diverged from our own. A hero must have a companion, and it is in the desert, after doing a good deed, that he meets his: a teddy bear, Made in Japan—but a very special teddy bear, as he will learn as the journey progresses.

Finally, he arrives at the Valley of the Angels, with pavement stretching to the horizon and cloaked in an acrid yellow mist that obscures visibility and irritates the eyes and throat. There he finds the legendary City of Illusions, where he is confronted by a series of diabolical abusement park attractions where his wit, courage, and Teddy's formidable powers will be tested to the utmost with death the price of failure. Victory can lead to the storied Bullet Train, the prize he needs to save radio station 2XG and possibly the world, and the next step in his quest.

As the fourth installment in what is projected to be one long story spanning five volumes, if you pick this up cold it will probably strike you as a bunch of disconnected adventures and puzzles each of which might as well be a stand-alone short-short story. As they unfold, only occasionally do you see a connection with the origins of the story or Philo's quest, although when they do appear (as in the linkage between the Library of Infinity and the Library of Ouroboros in The Tower of the Bear) they are a delight. It is only toward the end that you begin to see the threads converging toward what promises to be a stirring conclusion to a young adult classic enjoyable by all ages. I haven't read a work of science fiction so closely patterned on the hero's journey as described in Joseph Campbell's The Hero with a Thousand Faces since Rudy Rucker's 2004 novel Frek and the Elixir; this is not a criticism but a compliment—the eternal hero myth has always made for tales which not only entertain but endure.

This book is currently available only in a Kindle edition. The fifth and final volume of the Yankee Republic saga is scheduled to be published in the spring of 2020.

Posted at 20:59 Permalink