Software Engineering. Most software is basically a house of cards, developed quickly and not maintained properly (to save money, of course). We will see some serious software collapses within our lifetime.
We pretty much read about them at least once a week.
Then you start off fresh going “this time it’s going to be different”, but the same fucking things happen and you end up cramming that project into 3 weeks.
In the news this week: https://publicapps.caa.co.uk/docs/33/NERL Major Incident Investigation Preliminary Report.pdf
This is unprecedented since, well, January: https://en.wikipedia.org/wiki/2023_FAA_system_outage
Y2038 is my “retirement plan”.
(Y2K, i.e. the “year 2000 problem”, affected two-digit date formats. Nothing bad happened, but the consensus nowadays is that this wasn’t because the issue was overblown; it’s because the issue was recognized and seriously addressed. Lots of already-retired or soon-to-retire programmers came back to fix stuff in ancient software and made bank. In 2038, another very common date format will break: 32-bit Unix time. I’d say it’s much more common today than 2-digit dates, though 2-digit dates may have been more common in 1985. It’s going to require a massive remediation effort, and I hope AI-assisted static analysis will be viable enough to help us by then.)
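For anyone wondering what a Y2K-class bug actually looked like, here’s a minimal C sketch of the classic tm_year pitfall (tm_year is a real standard-library field; the buggy print is of course illustrative). The field counts years since 1900, so code that treated it as a two-digit year and hard-coded the century printed “19100” on January 1, 2000:

```c
#include <stdio.h>
#include <time.h>

int main(void) {
    time_t now = time(NULL);
    struct tm *t = localtime(&now);

    /* tm_year counts years since 1900. Code that assumed two digits
       and hard-coded the "19" century printed "19100" in 2000. */
    printf("buggy:   19%d\n", t->tm_year);
    printf("correct: %d\n", t->tm_year + 1900);
    return 0;
}
```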
My dad is a tech in the telecommunications industry. We basically didn’t see him for all of 1999. The fact that nothing happened is because of people working their asses off.
My dad had to stay in his office with a satellite phone over New Year’s in case shit hit the fan.
My dad still believes the entire Y2K problem was a scam. How do I convince him?
Well my dad does too and he worked his ass off to prevent it. Baby boomers are just stupid as shit, there’s not really much you can do.
Maybe a documentary from some folks who worked on that stuff? I imagine a short documentary exists on YouTube, or at least a podcast interview with someone who did it.
If he won’t believe it then, not sure what else you can do. Some people are just stuck in their old ways and beliefs despite any evidence you provide.
Even more difficult in a situation like this, because the work wasn’t widely publicized until years after. I didn’t even know this stuff until a few years ago, but I work with computers, so I believe it partly because of what I know about computer architecture.
Maybe he’ll believe it if he understands how 2038 affects the Linux OS and can see it in real time then?
Windows, Linux, FreeBSD, OpenBSD, NetBSD, and OSX have all already switched to 64-bit time.
Tell that to the custom binary serialization formats that all the applications are using.
Edit: and the long-calcified protocols that embed it.
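To make that concrete, here’s a hypothetical record layout (all field names and sizes invented for illustration). Even on an OS with 64-bit time, a serialized format that froze a signed 32-bit timestamp into its byte layout keeps the 2038 problem alive, because widening the field shifts everything after it and breaks every existing reader:

```c
#include <stdint.h>

/* Hypothetical on-disk/wire format, for illustration only. Assumes the
   compiler inserts no padding, which holds for this layout on common ABIs. */
struct legacy_record {
    uint32_t magic;       /* format identifier */
    int32_t  created_at;  /* seconds since 1970, signed 32-bit: overflows in 2038 */
    uint16_t flags;
    uint8_t  payload[54]; /* rest of the fixed 64-byte record */
};
```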
So they have a year 202020 bug then
I get the joke, but for those seriously wondering:
The epoch is Jan 1, 1970. Unix time uses a signed integer, so you can express up to 2^31 - 1 seconds with 32 bits or 2^63 - 1 with 64 bits.
A normal year has exactly 31,536,000 seconds (even if it is a year with a leap second, as those are ignored for Unix time). 97 out of every 400 years are leap years, adding an average of 0.2425 days or 20,952 seconds per year, for an average of 31,556,952 seconds.
That gives slightly over 68 years for 32-bit time, putting us at 1970 + 68 = 2038. For 64-bit time, it’s 292,277,024,627 years. However, some 64-bit time formats use milliseconds, microseconds, 100-nanosecond units, or nanoseconds, giving us “only” about 292 million years, 292,277 years, 29,228 years, or 292 years respectively. Assuming they use the same epoch, nanosecond 64-bit time values will become a problem some time in 2262. Even if they use 1900, an end date in 2192 makes them a bad retirement plan for anyone currently alive. (A sketch that checks this arithmetic follows this comment.)
Most importantly though, these representations are reasonably rare, so I’d expect this to be a much smaller issue, even if we haven’t managed to replace ourselves with AI by then.
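Here’s that quick back-of-the-envelope check in C; it just divides the signed maxima by the 31,556,952-second average year used above:

```c
#include <stdio.h>

int main(void) {
    const double secs_per_year = 31556952.0;       /* average Gregorian year */
    const double max_s32 = 2147483647.0;           /* 2^31 - 1 */
    const double max_s64 = 9223372036854775807.0;  /* 2^63 - 1 */

    printf("32-bit seconds last %.2f years -> 1970 + 68 = 2038\n",
           max_s32 / secs_per_year);

    const char *unit[] = {"seconds", "milliseconds", "microseconds",
                          "100ns ticks", "nanoseconds"};
    const double ticks_per_sec[] = {1e0, 1e3, 1e6, 1e7, 1e9};
    for (int i = 0; i < 5; i++)
        printf("64-bit %-12s last about %.0f years\n",
               unit[i], max_s64 / (ticks_per_sec[i] * secs_per_year));
    return 0;
}
```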
I can’t wait to retire when I’m 208 years old.
Omg we are in the same epoch as the butlarian crusade.
Butlerian Jihad, my dude. Hate to correct you, but the spice must flow.
I’m just glad you got that reference
If you’re going to correct people about Dune quotes, at least use one from the books! “The spice must flow” doesn’t appear in any of them; it’s a Lynch addition.
Yes, but it’s the most accessible Dune quote.
Cars haven’t. A whole lot of cars are gonna get bricked.
How many UNIX machines in production are still running on machines with 32-bit words, or using a 32-bit time_t?
How much software is still running 32-bit binaries that won’t be recompiled because the source code has been lost together with the build instructions, the compiler, and the guy who knew how it worked?
How much software is using int32 instead of time_t, then casting/converting in various creative ways?
How many protocols, serialization formats, and structs have 32-bit fields?
Irrelevant. The question you should ask instead is: how many of those things will still be in use in 15 years?
What is the basis for the 2038 problem?
The most common date format used internally is “seconds since January 1st, 1970”.
In early 2038, the number of seconds will exceed 2^31 - 1, the biggest number that fits in a certain (also very common) data type: the signed 32-bit integer. Values past that wrap around and are interpreted as negative, so instead of January 2038 the date will read as December 1901 or so.
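Here’s a minimal C sketch of the rollover, assuming a legacy system where time_t was a signed 32-bit integer. (The wrap is simulated through an unsigned addition, since signed overflow is undefined behavior in C, and ctime prints local time, so the exact clock readings shift with your timezone.)

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void) {
    int32_t last = INT32_MAX;  /* 2147483647 seconds: 03:14:07 UTC, Jan 19 2038 */

    /* One tick later, a 32-bit counter wraps to -2^31,
       which lands on 20:45:52 UTC, Dec 13 1901. */
    int32_t wrapped = (int32_t)((uint32_t)last + 1u);

    time_t a = last, b = wrapped;  /* widen to the host's time_t for printing */
    printf("last 32-bit second: %s", ctime(&a));
    printf("one second later:   %s", ctime(&b));
    return 0;
}
```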
Huh interesting. Why 2^31? I thought it was done in things like 2^32. We could have pushed this to 2106.
Signed integers. The bit patterns indeed go up to 2^32, but the second half is reserved for negative numbers.
With 8-bit numbers for simplicity (a runnable version follows this list):
0 means 0.
127 means 127 (the last number before 2^7).
128 means -128.
255 means -1.
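A tiny C sketch of the same table (the cast to int8_t is how you reinterpret the bit pattern as signed; strictly speaking the conversion is implementation-defined, but every mainstream platform wraps two’s-complement style):

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t patterns[] = {0, 127, 128, 255};
    for (int i = 0; i < 4; i++)
        printf("bit pattern %3u -> signed %4d\n",
               patterns[i], (int8_t)patterns[i]);
    return 0;
}
```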
Why not just use unsigned int rather than signed int? We rarely have to store times before 1970 in computers and when we do we can just use a different format.
Because that’s how it was initially defined. I’m sure plenty of places use unsigned, which means it might either work correctly for another 68 years… or break because the value gets converted to a signed 32-bit integer somewhere.
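A sketch of that failure mode: an unsigned 32-bit timestamp is perfectly happy one second past the 2038 boundary, right up until it passes through a signed 32-bit field somewhere. (Again, the conversion result is implementation-defined in C but wraps like this on every mainstream platform.)

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t t = 2147483648u;  /* one second past the signed rollover: fine as unsigned */
    int32_t  s = (int32_t)t;   /* ...until some layer stores it in a signed field */

    printf("unsigned: %u\n", t);  /* 2147483648 */
    printf("signed:   %d\n", s);  /* -2147483648: welcome back to December 1901 */
    return 0;
}
```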
Maybe this is just a big elaborate time travel experiment 68 years in the making?
I am taking the week off, family camping, and cell phones off for that week in 2038.
Are there currently any that are showing signs of imminent collapse? (Twitter, maybe?).
Or, for those who are untrained in this field, what are the signs to look for?
Is a website running on WordPress? That’s a system built on failed practices and constantly under attack. It needs a serious overhaul and possibly replacement, but the software runs a huge share of all websites.
While most instances of WordPress you’ll find in the wild are insecure and nothing more than bloated garbage, the CMS itself is actually fairly secure with minimal intervention, if you properly configure it on setup and keep up with software updates; they continually roll out patches for vulnerabilities as they are discovered.
If you turn off comments and new-user self-registration, run it on PHP 8.2 behind a WAF, and enable file write protection, it’s actually very robust.
At least when WordPress breaks, you have WP-CLI to troubleshoot it.
I work for a web hosting company. So many WP sites are out of date on plugins and core, and I’ve dealt with many compromised sites. Granted, there are auto-updates on both the WP side and the host’s service; it still happens pretty often.
I also work for a web host. Yeah, most idiots don’t do basic maintenance, which is why I just rename the dir to xxx.old, make a new folder, install core, delete the blank wp-content, and copy over the wp-content, DB, and wp-config.php from the borked install. Takes 10 minutes rather than 30 to update and fix. I call that the “Doctor Frankenstein” method.
Regarding Twitter: yes.
As a tech person outside Twitter, looking in: Twitter is metaphorically a huge airliner with one remaining engine, and that engine is pouring smoke.
The clown who caused the first four engines to fail has stepped out of the pilot’s seat, but still has the ability to fire the new pilot, and still has strong convictions on how to fly a plane.
That plane might land safely. But in the tech community, those of us fortunate not to be affected are watching with popcorn, because we expect a spectacular crash.
If anyone reading this is still relying on Twitter - uh, my advice is to start a Mastodon account. Or Myspace or something.
I can’t imagine the shitshow it would be if a log4j-style vulnerability and software update hit Twitter in its current state. I could see them shutting off all external web traffic until the overworked devs finish committing, while being held up with a visa-loaded gun pointed at their heads.
Mostly, the first sign is something like all your old .doc files no longer opening. Something like that.
As an everyday user of software who’s not a developer, this is not a secret. Nothing works well for any extended period of time.
Because it fit into an ecosystem of tech that is constantly evolving. Software as a whole evolves more quickly than most tech. You see the same effect in every other branch of engineering, just more slowly.
Example: They are having problems rebuilding a certain famous church in Europe that burned down because the trees that went into it are now all smaller. They can’t get a replacement part.
I just dealt with this about a month ago at work. A customer machine died and they wanted “an exact replacement”. I explained to sales that that is all I need to hear to know the project is going to be a disaster: parts go out of stock, the network stuff is not as backwards compatible as people think it is, and standards change. They went over my head and demanded the same machine. I now get daily emails from our fabricators about the problems they are having. Engineering is not a once-and-done thing. You need to have the staff and resources to keep your product matched up with the environment it is in.
Package management is impossible. When a big enough package pushes an update, the house of cards will fall. This leaves production projects stuck on greatly outdated package versions, because there is no budget to diagnose and replace packages that are no longer available when a dependency requires a change.
Examples: adminJs or admin bro… one of them switched the package used to render rich text fields.
react-scripts, or is it create-react-app? I don’t recall. Back-end packages no longer work as-is on the front end, or something like that. On huge projects, who’s got the budget to address this and get the project up to date?
This has to be a worldwide thing. There are way too many moving targets for every company to have all packages up to date.
It’s only a matter of time before an exploit of some sort is found, and who knows what happens from there.
That’s basically what happened with log4j, or whatever that Java bug was, a few years ago. A lot of things still haven’t been patched.
Does leftpad count as a collapse?