Nope. I don’t talk about myself like that.

  • 1 Post
  • 176 Comments
Joined 1 year ago
Cake day: June 8th, 2023

  • “Nope. There is an industry standard way of measuring latency, and it’s measured at the halfway point of drawing the image.”

    No. If you want to actually provide a link to your “industry standard,” feel free, but make sure that “standard” can actually be applied to a CRT first.

    You can literally focus the CRT to show only one pixel’s (more accurately, one beam-width’s) worth of value. And that pixel would be updated many thousands of times a second (effectively continuously, since it’s analog).

    If you’re going to define latency as “drawing the image” (by any part of that metric), then a CRT can draw a single “pixel” worth of value thousands of times a second, probably more, whereas your standard 60 Hz panel can only manage one image every 1/60th of a second (or roughly 1/360th for even the fastest LCDs).

    If there is a frame to draw and that frame is being processed, then yes, you’re right: measuring at the middle will yield a delay. But that isn’t how all games and devices have worked throughout history. There are many applications where the data being sent to the display is literally read from memory nanoseconds prior. CRTs have NONE of the processing delay that LCDs do.

    Further points of failure in your post: CRTs are not all “NTSC” standard (virtually every computer monitor, for instance). There are plenty of CRTs that can push much higher than the NTSC standard specifies.

    Here’s an example from a bog standard monitor I had a long time ago… https://www.manualslib.com/products/Sony-Trinitron-Cpd-E430-364427.html

    800 x 600/155 Hz
    1024 x 768/121 Hz
    1280 x 1024/91 Hz
    1600 x 1200/78 Hz

    So a 60 Hz LCD will always take about 0.016 seconds (16.7 ms) to draw the whole image, regardless of the resolution being displayed. Not so on the CRT… Higher-performance CRTs can draw more “pixels” per second, and when you lower the number of lines you want it to display, the full-frame draw times go down substantially. There are a lot of ways to define these things that your simplistic view doesn’t account for. The reality is, though, that if you skip the idea of a “frame,” the time from input to display on the CRT monitor can be lower simply because there’s no processing occurring; your limit is the physics of the materials the monitor is built out of, not some chip’s ability to decode a frame. Thus… no latency.
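
    A quick back-of-the-envelope sketch of that point (my own illustration, using the mode list quoted above; the only assumption is that one full frame takes 1/refresh-rate seconds):

    ```python
    # Full-frame draw time: a fixed 60 Hz LCD vs. the CRT modes listed above.

    def frame_time_ms(refresh_hz: float) -> float:
        """Time to draw one complete frame, in milliseconds."""
        return 1000.0 / refresh_hz

    # Modes from the Trinitron CPD-E430 manual quoted above.
    crt_modes = {
        "800x600 @ 155 Hz": 155,
        "1024x768 @ 121 Hz": 121,
        "1280x1024 @ 91 Hz": 91,
        "1600x1200 @ 78 Hz": 78,
    }

    print(f"60 Hz LCD: {frame_time_ms(60):.1f} ms per frame (fixed, at any resolution)")
    for mode, hz in crt_modes.items():
        print(f"{mode}: {frame_time_ms(hz):.1f} ms per frame")
    ```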

    Not frametime. Not FPS. Not Hz. Latency is NONE of those things, otherwise we wouldn’t have those other terms and would have strictly used “latency” instead.

    And a wonderful example of this is the Commodore 64 tape loading screens. https://youtube.com/watch?v=Swd2qFZz98U

    Those lines/colors are drawn straight from memory without the concept of a frame. There is no latency here. Many scene demos abused this behavior to achieve really wild effects as well. Your LCD cannot do that; those demos don’t function correctly on LCDs…
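
    Purely as a conceptual sketch (this is not real C64 code, and random() just stands in for the bits coming off the tape), the loading stripes boil down to this: the loader keeps flipping the border colour register, and each scanline simply shows whatever value the register holds while the beam draws it.

    ```python
    # Conceptual model of "racing the beam": no frame buffer, no frame at all.
    import random

    def draw_field(lines: int = 262) -> list[int]:
        picture = []
        for _ in range(lines):
            # The loader flips the register in step with data off the tape;
            # random() is just a stand-in for that data stream here.
            border_colour = random.choice([0, 1])   # e.g. black / light blue
            picture.append(border_colour)           # the beam shows it immediately
        return picture

    stripes = draw_field()
    print("".join("#" if c else "." for c in stripes[:80]))
    ```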

    Lightguns are a perfect example of how this can be leveraged (something that is completely impossible on an LCD, by the way).

    Specifically scroll down to the Sega section. https://www.retrorgb.com/yes-you-can-use-lightguns-on-lcds-sometimes.html

    By timing the click of the light gun against which pixel is currently being drawn in the frame, the console can take that as the gun’s input. That requires minimal latency to work. LCDs can’t do that.
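
    A rough sketch of the timing math involved (my illustration, not the console’s actual code; the constant is just the standard NTSC horizontal scan rate): if you know how long after the start of the field the gun’s sensor saw the beam, the line period tells you which scanline, and roughly where on it, was being drawn at that instant.

    ```python
    # Map "time since vertical sync" to the beam's position on an NTSC CRT.

    NTSC_LINE_PERIOD_US = 1_000_000 / 15_734.26   # ~63.6 microseconds per scanline

    def beam_position(us_since_vsync: float) -> tuple[int, float]:
        """Return (scanline index, fraction of the way across that line)."""
        line = int(us_since_vsync // NTSC_LINE_PERIOD_US)
        frac = (us_since_vsync % NTSC_LINE_PERIOD_US) / NTSC_LINE_PERIOD_US
        return line, frac

    # e.g. the gun's photodiode fires 8000 microseconds after vsync:
    print(beam_position(8000.0))   # roughly scanline 125, ~87% of the way across
    ```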

    Ultimately, people like you are trying to redefine what latency is, in a way that flies in the face of actual history. History shows us there is a distinct difference that has mattered, and even real applications that could not exist if latency were what you’re claiming it to be.

    https://yt.saik0.com/watch?v=llGzvCaw62Y#player-container

    Can you tell me why the LCD on the right is ALWAYS behind? And why it will ALWAYS be the case that it will not work, regardless of how fast the LCD panel is? The reason you’re going to come to is processing delay, which didn’t exist on CRTs. That’s “LATENCY”.
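
    To make that concrete (the numbers below are made-up placeholders, not measurements): the LCD’s delay is the sum of pipeline stages that simply don’t exist on a CRT, so even an infinitely fast panel is still behind by its processing stages.

    ```python
    # Toy illustration: added display delay = sum of processing stages.
    # The millisecond values are invented examples, not measured figures.

    lcd_pipeline_ms = {
        "buffer until a full frame has arrived": 16.7,
        "scaler / image processing": 5.0,
        "pixel response": 2.0,
    }
    crt_pipeline_ms = {}   # the signal drives the electron gun directly

    print("LCD added delay:", sum(lcd_pipeline_ms.values()), "ms")
    print("CRT added delay:", sum(crt_pipeline_ms.values()), "ms")
    ```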

    “When talking about retro consoles, we’re limited by the hardware feeding the display, and the frame can’t start drawing until the console has transmitted everything.”

    This is where you’re completely wrong. A CRT doesn’t know the concept of a frame. It draws the input it gets. Period. There’s no buffer… there’s nowhere to hold onto anything that is being transmitted. It’s literally just spewing electrons at the phosphors.

    Edit: typo

    Edit 2: to expound on the LCD vs CRT thing with light guns. A CRT draws the “frame” as it’s received… as it gets the signal it varies the voltage on the electron gun itself, which means that when the Sega console (in this case) sets the video buffer to the white value for a coordinate and displays it, it knows exactly which pixel is currently being modified. The LCD will take the input, store it in a buffer until it has the full frame, and only then display it. The Sega doesn’t know when that frame will actually be displayed, because there’s other shit between it and the display mechanism doing stuff. There is an innate delay that MUST occur on the LCD that simply doesn’t on the CRT. That’s the latency.






  • Nah, that’d be mean. It isn’t “simple” by any stretch. It’s an aggregation of a lot of hours put into it. What’s fun is that when it gets that big you start putting tools together to do a lot of the work/diagnosing for you. A good chunk of those tools have made it into production for my companies too.

    LibreNMS to tell me what died and when… Wazuh to monitor most of the security aspects of it all. I have a Gitea instance with my own repos for scripts when maintenance time comes around. Centralizing the scripts plus a cron stub on the containers/VMs means you can update all your stuff in one go.
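
    A hedged sketch of what such a cron stub could look like (the repo URL, paths, and script-naming convention here are my own placeholders, not the actual setup): each container/VM pulls the central scripts repo and runs whatever maintenance script applies to it.

    ```python
    #!/usr/bin/env python3
    # Pull the central maintenance repo and run this host's script.
    # All names/paths below are hypothetical placeholders.
    import socket
    import subprocess
    from pathlib import Path

    REPO = "https://gitea.example.lan/ops/maintenance.git"   # hypothetical Gitea repo
    CHECKOUT = Path("/opt/maintenance")

    def run(cmd: list[str]) -> None:
        subprocess.run(cmd, check=True)

    if CHECKOUT.exists():
        run(["git", "-C", str(CHECKOUT), "pull", "--ff-only"])
    else:
        run(["git", "clone", REPO, str(CHECKOUT)])

    # Assumed convention: a per-host script named after the machine,
    # falling back to a shared one.
    script = CHECKOUT / f"{socket.gethostname()}.sh"
    if not script.exists():
        script = CHECKOUT / "common.sh"
    run(["bash", str(script)])
    ```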


    40 SSDs as my OSDs… 5 hosts… all nodes run all functions (monitor/manager/metadata servers); if I added more servers I would not add any more of those. (I do have 3 more servers for “parts”/spares… but could turn them on too if I really wanted to.)

    2x 40 Gbps networking for each server.

    Since the upstream internet is only 8 Gbps, I let some VMs use that bandwidth too… but that doesn’t eat into it enough to starve Ceph at all. There’s 2x 1 Gbps for all the normal internet-facing services (which also acts as an innate rate limiter for those services).
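
    Quick arithmetic on those numbers (just restating the figures above): the shared uplink is a small fraction of each node’s Ceph-facing bandwidth, which is why letting VMs borrow it doesn’t starve Ceph.

    ```python
    # Ratio of the shared uplink to one node's storage-network bandwidth.
    ceph_per_node_gbps = 2 * 40   # 2x 40 Gbps LACP per server
    upstream_gbps = 8             # shared internet uplink
    services_gbps = 2 * 1         # internet-facing services (innate rate limiter)

    print(f"Uplink is {upstream_gbps / ceph_per_node_gbps:.0%} of one node's Ceph bandwidth")
    print(f"Internet-facing services are capped at {services_gbps} Gbps")
    ```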





    The fire extinguisher is in the garage… literally feet from the server. But that specific problem is actually being addressed soon. My dad is setting up his own cluster and I fronted him about half the capacity I have. I intend to sync long-term/slow storage to his box (the TrueNAS box is the Proxmox Backup Server target, so it also collects the backups and puts a copy offsite).

    Slow process… Working on it :) Still have to maintain my normal job after all.

    Edit: another possible mitigation I’ve seriously thought about for “fire” are things like these…

    https://hsewatch.com/automatic-fire-extinguisher/

    Or those types of modules that some 3d printer people use to automatically handle fires…


  • Absurdly safe.

    Proxmox cluster, HA active. Ceph for live data. TrueNAS for long-term/slow data.

    About 600 pounds of batteries at the bottom of the rack to weather short power outages (up to 5 hours). 2 dedicated breakers on different phases of power.

    Dual/stacked switches with LACP’d connections that must span both switches (one switch dies? Who cares). Dual firewalls with a CARP ACTIVE/ACTIVE connection…

    Basically everything is as redundant as it can be, aside from one power source into the house… and one internet connection into the house. My “single points of failure” are all outside of my hands… and are all mitigated/risk-assessed down.

    I do not use cloud anything… putting even 1/10th of my shit onto the cloud would cost thousands a month.




    The site is already available over HTTPS. Why would you even serve content unencrypted?

    If you need an education on the matter… Here you go. https://www.cloudflare.com/learning/ssl/why-use-https/

    “I don’t handle sensitive information on my website so I don’t need HTTPS”

    A common reason websites don’t implement security is because they think it’s overkill for their purposes. After all, if you’re not dealing with sensitive data, who cares if someone is snooping? There are a few reasons that this is an overly simplistic view on web security. For example, some Internet service providers will actually inject advertising into HTTP-served websites. These ads may or may not be in line with the content of the website, and can potentially be offensive, aside from the fact that the website provider has no creative input or share of the revenue. These injected ads are no longer feasible once a site is secured.
    Modern web browsers now limit functionality for sites that are not secure. Important features that improve the quality of the website now require HTTPS. Geolocation, push notifications and the service workers needed to run progressive web applications (PWAs) all require heightened security. This makes sense; data such as a user’s location is sensitive and can be used for nefarious purposes.

    I don’t feel the need to be your teacher. You can easily google why you should always be using HTTPS. There are numerous reasons… all overwhelmingly obvious. Forget the basic “not every ISP is an angel, and they will all collect as much information as they can get”. But I already said that… “It’s still best practice to limit sniffing.” Not sure why I need to elaborate any more on that. It’s very much akin to “why close your window blinds”: because nobody likes a peeping tom.

    Ultimately, for this specific website, it’s literally a matter of changing a couple of lines of config in their Apache or nginx instance (or whatever proxy they’re using). It’s called best practice for a reason.

    Edit: Hell, it even gives you a bit more of a guarantee that your site makes it to the consumer unaltered. It would be odd for that site to have its packets intercepted and midget porn added to every page, wouldn’t it? Think that would hurt the guy’s reputation?