Came to post the same. Seems like the most awkward possible way to phrase that.
Your “Disks not included” suggestion, or heck, just “empty” would surely be better.
Interestingly, looking at Gentoo’s package, they have both the github and tukaani.org URLs listed:
https://github.com/gentoo/gentoo/blob/master/app-arch/xz-utils/xz-utils-5.6.1.ebuild#L28
From what I understand, those wouldn’t be the same tarball, and might have thrown an error.
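As I understand it, Portage verifies whichever of those it downloads against the digests recorded in the package’s Manifest, so a tarball that doesn’t match what the maintainer hashed fails before it’s ever unpacked. A rough sketch of doing the same check by hand (the URL is just the standard GitHub release-asset form and may no longer resolve, since that release was later pulled):
# fetch the release tarball from one of the two listed locations
wget https://github.com/tukaani-project/xz/releases/download/v5.6.1/xz-5.6.1.tar.xz
# hash it and compare against the SHA512 recorded in Gentoo's Manifest for xz-utils,
# which has entries of the form: DIST xz-5.6.1.tar.xz <size> BLAKE2B <hash> SHA512 <hash>
sha512sum xz-5.6.1.tar.xz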
Sharp also make great commercial-grade printers that are 100% Linux compatible; we’re using these at work: http://global.sharp/products/copier/products/bp_70c65/index.html
They don’t really make anything small enough to be a “home” model; this looks like their smallest printer: https://global.sharp/products/copier/products/mx_c358f/index.html (and that’s around $1000, if you could even find someone to sell you one).
Fortunately, there’s an extension that solves that: https://microsoftedge.microsoft.com/addons/detail/ajgodcbbfnpdbopgmfcgdbfhabbnilbp
I’ve never seen them in a store here in New Zealand. I’ve been trying to grow them, but while the tree is doing well it is yet to produce fruit.
I did manage to buy some at a supermarket in Berlin a few years ago while on holiday, they were packed like cherry tomatoes in a clear plastic punnet.
The egg-shaped fruit you’ve got are frequently the “Meiwa” or “Nagami” cultivars; OP’s round fruit may be the “Marumi”.
I’d be curious to see how much cooling a SAS HBA would get in there. Looking at Broadcom’s 8 external port offerings, the 9300-8e reports 14.5W typical power consumption, 9400-8e 9.5W, and 9500-8e only 6.1W. If you were considering one of these, definitely seems it’d be worth dropping the money on the newest model of HBA.
I’m definitely curious; I’d personally only need it to be a NAS + Plex server, for which either of the CPUs they’re offering is a bit overkill, but it’s nice that it fits a decent amount of RAM and that you’re not forced to choose between adding storage or networking.
Single-sided drives can be up to 4TB though, no?
Or failing that, take your pick of
To expand on @doeknius_gloek’s comment, those categories usually directly correlate to a range of DWPD (endurance) figures. I’m most familiar with buying servers from Dell, but other brands are pretty similar.
Usually, the split is something like this: Read Intensive (RI) at roughly 1 DWPD, Mixed Use (MU) at roughly 3 DWPD, and Write Intensive (WI) at roughly 10 DWPD.
(Consumer SSDs frequently have endurances only in the 0.1 - 0.3 DWPD range for comparison, and I’ve seen as low as 0.05)
You’ll also find these tiers roughly line up with the SSDs that expose different capacities while having the same amount of flash inside; where a consumer drive would be 512GB, an enterprise RI would be 480GB, and a MU/WI only 400GB. Similarly 1TB/960GB/800GB, 2TB/1.92TB/1.6TB, etc.
If you only get a TBW figure, just divide by the capacity and the length of the warranty. For instance, a 1.92TB 1 DWPD drive with a 5-year warranty might list 3.5PBW.
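As a quick back-of-the-envelope check of that example (rounding aside):
# DWPD ≈ TBW / (capacity in TB x 365 days x warranty years)
echo "scale=2; 3500 / (1.92 * 365 * 5)" | bc
# prints .99, i.e. roughly the advertised 1 DWPD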
This video about ex-Soviet RTGs of questionable radioactive source choice is quite a good watch
https://www.youtube.com/watch?v=NT8-b5YEyjo
NASA apparently used RTGs for deep space missions only, while in the same timeframe the Soviets scattered them all across the countryside, then promptly forgot about them.
Yeah, it’s joined Facebook and Twitter on that “do not click” list for me.
You’d think that quitting cold turkey would have been hard, but it somehow just hasn’t been.
But also
mysterytool --help
mysterytool: unrecognized option: '-'
ok then…
mysterytool -h
mysterytool: unrecognized option: 'h'
I thought it might be sensible on Linux to use MS Edge for Teams (the PWA version).
Nope, it's just as shit in Microsoft's own browser. There is apparently no saving it.
Yep.
Definitely no injuries sustained in his life. Just a regular ol' weirdo.
and the people who do still download wouldn’t care about doing it while on battery
Very much this; I’ve got a whole army of machines I can SSH into to launch a long-running download, which frequently also cuts out a second step of copying the file to where it needs to be after downloading it (an action that would normally cause additional battery usage on the laptop).
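Something like this is usually all it takes (hostname and destination path are made up here); the command returns immediately and the download carries on without the laptop:
# kick off the download on an always-on box, detached from the SSH session
ssh nas "nohup wget -c 'https://example.com/big-file.iso' -P /srv/downloads >/dev/null 2>&1 &"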
And I thoroughly agree with you; I want the laptop to go to S3 sleep immediately when I shut the lid, and then pull it out of my bag a few hours later with only a couple of percent of the battery consumed in the interim.
I’ve definitely forgotten to close mine once or twice. Even though my custom (LoRa) integration is just simulating pushing the button on the wall, by closing a relay contact and watching for closed status with a reed switch, it means I can do it from anywhere.
It’s a shame that even “cheap” versions are hundreds of dollars, because the perfect absolute position sensor would be a “draw wire displacement sensor” (goes by a few variations on that name).
Basically a spring-loaded spool of wire with a multi-turn position sensor, rolls in and out like a tape measure.
Worse still, a lot of “modern” designs don’t even bother including that trivial amount of content in the page, so if you’ve got a bad connection you get a page with some of the style and layout loaded, but nothing actually in it.
I’m not really sure how we arrived at this point; lazy-loading seems to universally make things worse, yet it’s becoming more and more common.
I’ve always vaguely assumed it’s just a symptom of people having never tested in anything but their “perfect” local development environment; no low-throughput or high-latency connections, no packet loss, no nothing. When you’re out here in the real world, on a marginal 4G connection - or frankly even just connecting to a server in another country - things get pretty grim.
Somewhere along the way, it feels like someone just decided that pages often not loading at all was more acceptable than looking at a loading progress bar for even a second or two longer (but being largely guaranteed to have the whole page once you get there).
Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.
We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.
Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz, 32 PCIe 4.0 lanes, and is rated 117W.
The 4584PX, ostensibly the top of this range, also has 16 cores but at double the clock speed, with 28 PCIe 5.0 lanes and a 120W rating; it seems like it would be a perfectly fine drop-in replacement for that.
(I will note one significant difference: the Xeon does come with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)
As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
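Those ballparks check out against the usable per-lane rates of roughly 1.97GB/s for PCIe 4.0 and 3.94GB/s for 5.0:
echo "32 * 1.97" | bc   # ~63GB/s aggregate for 32 lanes of PCIe 4.0
echo "28 * 3.94" | bc   # ~110GB/s aggregate for 28 lanes of PCIe 5.0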
Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say, though, I fully expect it’s going to be smaller designs marketed as “edge” compute, like that Dell system.