• 0 Posts
  • 359 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • Sure we can make a different ticket for that to move this along, but we’re getting product to agree first.

    Ooof, I’m glad I never worked there.

    QA’s job should be to make sure the bugs are known/documented/prioritised. It should never be a roadblock that interrupts work while several departments argue over what to do with a ticket.

    Seriously, who cares if the current ticket is left open with a “still need to do XYZ” or it gets closed and a new one is opened: “need to do XYZ”. Both are perfectly valid; do whichever one you think is appropriate and don’t waste anyone’s time discussing it.


  • what else do I get with something like CARROT that the default doesn’t offer

    More control over what data is highlighted as the primary metrics at the top of the report (or on widgets).

    Where I live, the actual temperature and “feels like” temperature are often really far apart. Apps like Carrot can be configured to show “feels like” as the main temperature, but Apple only shows it if you scroll down past a bunch of nearly useless stats like the sunset time (spoiler: it will be the same as yesterday) and how the current temperature compares to the historical average.

    Also, I live near the beach and want to know the tides. That’s almost more important than the temperature.


  • If any of that is part of the hiring process - I don’t want the job.

    If HR is incompetent enough to consider things like relationship status or political opinions then what other bullshit policies does the company have? It’s probably the tip of the iceberg.

    By far the most important thing is to have good colleagues, because without good colleagues your job will be miserable or the company will not last (or both). I made the mistake of taking a shitty job at high pay once and it was one of the worst decisions of my life.

    Don’t waste your life working for incompetent companies.

    Also, as someone who has hired devs… if you have a public profile, and it doesn’t make you look hopelessly incompetent, then your application is going onto my shortlist. Too many applications cross my desk to look at all of them properly, so a lot of good candidates won’t even get considered. But if there’s a GitHub or similar profile, I’m going to open it, and if I see green squares… you’ve got my attention.

    You’ll get my attention whether the username matches your real name or not, but bonus points if it’s your real name. Openness leads to trust. And trust is critical.


  • it’s all CGI

    Crushing the industry I work in, and my dad worked in, is CGI? I’m pretty sure that’s very real.

    I love listening to digital music as much as anyone. More than most people. But it will never replace physical instruments for me, and I don’t like to see a company celebrating that transition - even if I admit it’s very much real.

    I think the world was a better place when all 50 people on a train carriage listened to the one musician who brought a guitar onto the train and called out asking them to sing a favourite song next.


  • Some of us don’t like watching beloved musical instruments destroyed. We also don’t like how so many people think watching TikTok on an iPad is “music”.

    When my father died, my sister didn’t give a shit about the house. She just wanted the guitar - which our father (a drummer) inherited when the lead guitarist in his band died. The guitarist had two dozen guitars, but this one was his favourite.

    It’s close to a century old, nobody knows what trade secrets the luthier who created it used to get that sound, and no other instrument sounds the same. It’s been used on stage in countless live performances on every continent in the world and has been used to record over a hundred songs in professional recording studios. It was used to play music at the funeral of both the previous owners and it’s literally impossible to replace.

    I get it, not every instrument is that special… but this instrument wasn’t that special either when the first guitarist ever picked it up. Nearly all instruments have the potential to become that special… and Apple created a video dedicated to destroying a bunch of them while also implying that listening to an MP3 is as good as an actual instrument. No way.


  • Yeah I’ve given up on integration tests.

    We just do “smoke testing” — essentially a documented list of steps that a human follows after every major deployment. And we have various monitoring tools that do a reasonably good job of detecting and reporting problems (for example, how much money to charge a customer is calculated twice by separate systems, and if they disagree… an alert is triggered and a human will investigate. And if sales are lower than expected, that will be investigated too).

    Having said that, you can drastically reduce the bug surface area and reduce how often you need to do smoke tests by moving as much as possible out of the user interface layer into a functional layer that closely matches the user interface. For example if a credit card is going to be charged, the user interface is just “invoice number, amount, card detail fields, submit, cancel”. And all the submit button does is read every field (including invoice number/amount) and send it to an API endpoint.

    From there all of the possible code paths are covered by unit tests. And unit tests work really well if your code follows industry best practices (avoid side effects, have a good dependency injection system, etc).
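    As a minimal sketch of that split (the types and names here are invented for illustration, not from any real project): the UI layer stays dumb, and the logic lives in a pure function that unit tests can hit directly.

```swift
// Hypothetical example of the "thin UI" idea -- not from a real codebase.
// The submit button just collects fields into this value and posts it.
struct ChargeRequest {
    let invoiceNumber: String
    let amountCents: Int
}

// Validation is a pure function with no side effects, so unit tests
// can cover every code path without touching the UI at all.
func isValid(_ request: ChargeRequest) -> Bool {
    return !request.invoiceNumber.isEmpty && request.amountCents > 0
}
```

    Because nothing here touches a view or a network, the tests stay fast and the UI project barely ever needs to change.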

    I generally don’t bother with smoke testing if nothing that remotely affects the UX has changed… and I keep the UX as a separate project so I can be confident the UX hasn’t changed. That code might go a year without a single code commit even on a project with a full time team of developers. Users also appreciate it when you don’t force them to learn how their app works every few months.


  • Um - Apple’s problems are very public.

    It was clear they had supply constraints a few years ago, and when those cleared up there was a huge bubble of sales. Expecting growth this year when so many regular customers just got a new phone would be silly.

    And it’s also a distraction - the problems facing Apple are

    1. How poorly the company is responding to antitrust complaints.
    2. The Vision Pro doesn’t seem to be doing well, and their car project was so much worse they literally killed it.
    3. Twelve years ago Apple was leading the industry on digital assistants… Siri was nowhere near good enough, but nobody else had a “good enough” product either, and Siri showed real promise. Now? WTF is taking so long? It’s pretty clear other companies are very close to achieving what Siri failed to achieve, and there’s not much to indicate Apple can keep up.

  • The long-term popularity of any given tool for software development is proportional to how much labour arbitrage it enables.

    Right. Because if you quote $700,000 to do a job in C/C++, and someone else quotes $70,000 to do the same job in JavaScript… no prizes for correctly guessing who wins the contract.

    But that’s not the whole story. Where C really falls to shit is if you compare it to giving the JavaScript project $500,000. At that point, it’s still far cheaper than C, but you can hire a 7x larger team. Hire twice as many coders and also give them a whole bunch of support staff (planning, quality assurance, user experience design, a healthy marketing budget…)

    JavaScript is absolutely a worse language than C/C++. But if you compare Visual Studio to Visual Studio Code (with a bunch of good plugins)… then there’s no comparison: VSCode is a far better IDE. And Visual Studio has been under active development since the mid 90’s. VSCode has existed less than half that long and has already eclipsed it, despite being backed by the same company, and despite that company being pretty heavily incentivised to prioritise the older product (which they sell for a handsome profit margin, while the upstart is given away for free).

    I learned C 23 years ago and learned JavaScript 18 years ago. In my entire life, I’ve written maybe 20,000 lines of C code where I was actually paid to write that code and I couldn’t possibly estimate the number of lines of JavaScript. It’d be millions.

    I hate JavaScript. But it puts food on the table, so I turn to it regularly.

    Large Language Models are a remarkable discovery that should, in the long term, tell us something interesting about the nature of text. They have some potentially productive uses. Its destructive uses and the harm it represents, however, outweigh that usefulness by such a margin that, yes, I absolutely do think less of you for using them. (You can argue about productivity and “progress” all you like, but none of that will raise you back into my good opinion.)

    Yeah you’re way off the mark. Earlier today I added this comment to my code:

    // remove categories that have no sales

    For context… above that comment were fifty lines of relatively complex code to extract a month of sales data from several database tables and summarise it down to a simple set of figures which can be used to generate a PDF report for archival/potential future auditing purposes. Boring business stuff that I’d rather not work on, but it has to be done.

    The database has a bunch of categories which aren’t in use currently (e.g. seasonal products) and I’d been asked to remove them. I copy/pasted that comment from my issue tracker into the relevant function, hit enter, and got six lines of code. A simple map/reduce function that I could’ve easily written in two minutes. The AI wrote it in a quarter of a second, and I spent one minute checking that it worked properly.
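    The generated code isn’t shown in the comment, but a function of that general shape might look something like this sketch (the data model is invented for illustration):

```swift
// Hypothetical data shape -- the real schema isn't shown in the comment.
struct SalesCategory {
    let name: String
    let sales: [Double]
}

// remove categories that have no sales
func categoriesWithSales(_ categories: [SalesCategory]) -> [SalesCategory] {
    // a category counts as "in use" if its sales total is non-zero
    return categories.filter { $0.sales.reduce(0, +) != 0 }
}
```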

    That’s not a “potential” productivity boost, it’s a big one. Does that make me worse at my job? No - the opposite. I’m able to focus all of my attention on the advanced features of my project that separate it from the competition, without getting distracted much by all the boring shit that also has to be done.

    I’ve seen zero evidence of LLM-authored code being destructive. Sure, it writes buggy code sometimes… but humans do that too. And anyone with experience in the industry knows it’s easier to test code you didn’t write… well, guess what, these days I don’t write a lot of my code. So I’m better equipped to catch the bugs in it.


  • abhibeckert@lemmy.worldtoProgramming@programming.dev...

    PowerShell is heads and shoulders over bash

    Sure… but that’s a low bar. Bash is basically the worst shell of them all (if you exclude the ones that are so bad nobody uses them).

    I’m a fan of fish personally. It’s both concise and feature rich. The fish language isn’t something I’d want to write a complex shell script in, but it’s a thousand times better than bash, and if you’re writing something complex then I’d argue any scripting language is the wrong tool for the job. Including PowerShell. You should be using a proper programming language (such as C#).

    PowerShell is innovative, for sure. But string input/output shell scripting wasn’t broken (unless you use bash), and I’m convinced trying to fix it by replacing simple string input/output with objects was a mistake.

    I love OOP. However I also love using existing tools that work well, and none of those tools are designed to be OOP.



    Swift is a relatively young language (ten years), and ~3 years ago a lot of things were brand new or had relatively recently been refactored. Also, Apple released it to the public before it was really finished - and did the final work of finishing off the language out in the open, with collaboration, the way any well run open source project should be developed.

    For the first five years or so it was barely ready for production use in my opinion, and it sounds like you tried to use it when it was ready but still had a few rough edges, such as documentation.

    For example, async/await was added around the time you looked into it, and that obviously made a lot of documentation relating to threads out of date (from memory they pretty much deleted all of it and started the long process of rewriting it, beginning with very basic docs to get broad coverage before filling in detailed coverage).

    And before that, the Unicode integration wasn’t quite right yet — they did the tough work to make this work (most of these characters are multiple bytes long):

    let greeting = "Hello, 🌍! こんにちは! 🇮🇳"
    
    for character in greeting {
        print(character)
    }
    

    And this evaluates to true even though the raw data of the two strings are totally different:

    let eAcute: Character = "\u{E9}" // é
    let combinedEAcute: Character = "\u{65}\u{301}" // e followed by a combining acute accent
    
    if eAcute == combinedEAcute {
        print("Both characters are considered equal.")
    }
    

    I hated how strings were handled in the original Swift, but it’s great now. It’s really hard to write documentation when fundamentals like that are changing. These days though, Swift is very stable, and all the recent changes have been new features that don’t affect existing code (or already written documentation). For example, in 2023 they added Macros - essentially an inline function (your function won’t be compiled as an actual function - the macro code will be embedded in place wherever it’s “called”).

    You declare a macro like this (simplified - the expansion logic itself is implemented separately, in a compiler plugin built on SwiftSyntax):

    @freestanding(expression)
    macro assertNotNil(_ value: Any?, message: String) = #externalMacro(
        module: "MyMacros", type: "AssertNotNilMacro")
    

    And then if you write this code:

    #assertNotNil(optionalValue, message: "optionalValue should not be nil")
    

    … it will compile as this:

    if optionalValue == nil {
        print("Assertion failed: optionalValue should not be nil")
    }
    

    Notice it doesn’t even do any string interpolation at runtime. All of that happens at compile time! Cool stuff although it’s definitely a tool you can shoot yourself in the foot with.



    is [Swift] using fibers instead of real OS threads(?)

    There are similarities to fibres, I’d say they’re solving the same problem, but they’re not really the same thing.

    It uses libdispatch which is built on top of “real OS threads” though on some kernels (especially BSD ones) it takes advantage of non-POSIX standard OS threading features (it can run without them, but runs better with them).

    Essentially you give the dispatcher a set of tasks, and the dispatcher will execute those tasks on as many threads as makes sense for the available hardware (this is where kernel integration helps, because the number of available CPU cores depends on the current workload for all processes, not just your process, and it changes from one millisecond to the next). The general goal is to keep every CPU core fully loaded, unless QoS has been set low which would prioritise battery life (and might, for example, use “efficiency cores” or avoid “turbo boost”).
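    In code, handing a set of tasks to the dispatcher looks roughly like this (a generic libdispatch sketch, not the poster’s code):

```swift
import Dispatch

// Eight independent tasks handed to the dispatcher; it decides how many
// OS threads to actually use for the available hardware.
let group = DispatchGroup()
let queue = DispatchQueue.global(qos: .userInitiated)
var results = [Int](repeating: 0, count: 8)
let resultLock = DispatchQueue(label: "results.lock") // serialises writes

for i in 0..<8 {
    queue.async(group: group) {
        let value = i * i                     // stand-in for real work
        resultLock.sync { results[i] = value } // safe concurrent write
    }
}
group.wait() // block until every task has finished
```

    The QoS parameter is where the battery-life vs throughput tradeoff mentioned above gets expressed.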

    What are these things/what is this comparing?

    It’s a simple variable assignment, such as person.name = "bob", where the compiler has recognised the value might be accessed concurrently from multiple threads and may need to pause execution for a task that is waiting for the lock to be released. But of course, since the goal is to keep the CPU core active, it doesn’t just pause the code running on that core - it’s likely to start executing something else on the same core while waiting.
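    A rough sketch of the underlying pattern, with a serial queue standing in as the lock around a property (the Person type is invented for illustration):

```swift
import Dispatch

// Hypothetical type: a serial queue guards the stored property, so a
// writer on another thread waits until the "lock" queue is free.
final class Person {
    private let lock = DispatchQueue(label: "person.lock")
    private var _name = ""
    var name: String {
        get { lock.sync { _name } }            // waits if another thread holds the lock
        set { lock.sync { _name = newValue } } // ditto for writes
    }
}
```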

    OperationQueue and DispatchQueue are basically the same, but OperationQueue has additional scheduling / dependency features which come with a slight performance overhead. If your execution tasks are slow and have minimal interaction with other threads, such as resizing half a million images, then you wouldn’t notice the performance overhead. But if you have very short operations that interact with each other (or if you just don’t need any of those management features) then DispatchQueue is the way to go.

    They’re largely the same in basic use:

    dispatchQueue.async {
        // your code here
    }
    
    operationQueue.addOperation {
        // your code here
    }
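    One of those extra OperationQueue scheduling features, dependencies, looks like this (operation names invented):

```swift
import Foundation
import Dispatch

var order: [String] = []
let lock = DispatchQueue(label: "order.lock") // serialises appends

let download = BlockOperation { lock.sync { order.append("download") } }
let process = BlockOperation { lock.sync { order.append("process") } }
process.addDependency(download) // process won't start until download finishes

let queue = OperationQueue()
queue.addOperations([download, process], waitUntilFinished: true)
```

    DispatchQueue has no equivalent of addDependency; that bookkeeping is part of the overhead OperationQueue pays per operation.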
    


    What? That’s not what happened at all.

    JavaScript was built entirely by Netscape from the ground up with no external involvement, and that was totally their “initial plan”. It shipped in testing (under the name “LiveScript”) within months of Netscape 1.0.

    And then Sun Microsystems paid Netscape a lot of money to rename it to JavaScript and pretend it was related to Java, even though it had nothing at all to do with Java.



    Swift. It’s an amazing language with clear but concise syntax that has every language feature you could ever want including some really nice ones that are quite rare (not unique, but certainly rare especially all in one place).

    The most unique and meaningful features are the memory manager and thread manager.

    Write a tight loop in Swift vs almost any other language and there’s a good chance Swift will use orders of magnitude less memory - because it’s better at figuring out when you’re done with a variable and don’t need that five megapixel image anymore. And it’s fast too — memory management is something we rarely need to worry about in most languages, but in Swift it’s more like “almost never” instead of just “rarely”.

    The threading system is so efficient that calling something on another thread is almost as fast as an ordinary function call, and the code to do it is also almost as clean as a function call, as well as all the tools you need to allow efficient (and reliable) data transfer between threads. The few mistakes you can make are usually caught by the compiler. Swift programmers use threads all the time even if they don’t really need to. It’s nothing like other languages where threads introduce unnecessary complexity/bugs or performance bottlenecks.

    Seriously, look at this comparison of DispatchQueue and OperationQueue doing a thread-locked operation (setting a shared variable). A million operations in a bit over zero seconds vs nearly 30 seconds - and the kicker is the 30 second one isn’t “slow”; a lot of thread safety management tools would take minutes if you tried to do that (these two comparisons are both done in Swift, which has several different ways you can deal with thread safety).
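    The comparison itself isn’t reproduced here, but a benchmark of that general shape can be sketched like this (iteration count and structure are illustrative, not the original measurement):

```swift
import Foundation
import Dispatch

// Both sides increment a shared counter behind the same "lock".
// The difference being measured is the per-task scheduling overhead.
let iterations = 10_000
let lock = DispatchQueue(label: "counter.lock")
var counter = 0

// DispatchQueue: very cheap per-task overhead
let dispatchStart = Date()
DispatchQueue.concurrentPerform(iterations: iterations) { _ in
    lock.sync { counter += 1 }
}
let dispatchSeconds = Date().timeIntervalSince(dispatchStart)

// OperationQueue: one Operation object per increment, much heavier per task
counter = 0
let operationQueue = OperationQueue()
let operationStart = Date()
for _ in 0..<iterations {
    operationQueue.addOperation { lock.sync { counter += 1 } }
}
operationQueue.waitUntilAllOperationsAreFinished()
let operationSeconds = Date().timeIntervalSince(operationStart)

print("DispatchQueue: \(dispatchSeconds)s, OperationQueue: \(operationSeconds)s")
```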

    Swift is fully open source and cross platform. But unfortunately it hasn’t taken off outside of Apple platforms, and therefore it lacks comprehensive libraries for a lot of commonplace tasks particularly when it comes to server side development or interacting with the Window Manager on Windows/Linux/Android.


  • Phones are remarkably durable now

    Even if you look after them perfectly the oleophobic coating starts to wear off with normal use after about six months.

    The iPhone 15 Pro Max’s glass is about 5 or 6 on the Mohs scale. Beach sand is much harder than that and often very sharp, so if you ever go to the beach (I’m lucky enough to do that almost every day), you’re probably going to have sand in your pocket/bag and that will scratch the screen. My current screen protector is about 12 months old and has tens of thousands of micro scratches (only visible in certain lighting) and five or six deep gouges (visible whenever the screen is off). Time to replace the protector.


  • They all take away from the display. Even if nothing else, the simple fact you have one more layer is adding a surface for light to bounce off. Glass is not perfectly transparent - a few percent of light bounces off each uncoated glass surface, depending on the chemical makeup and also how the layer is bonded to the display (there are ways to bond it that reduce the issue, and Apple uses those for all the internal layers on the display).

    Having said that — I still think screen protectors are worth it, because I use my phone for a long time and am pretty rough with it, so my glass is going to get scratched, and the ability to replace the outer layer periodically (and cheaply) is worth the tradeoff to me.

    The best ones don’t cover the screen - they cover pretty much the entire front of the device. Unfortunately with some iPhones those are not available (e.g. if the phone has curved edges). And they have an oleophobic coating — because holy shit fingerprints are bad without that. This is another reason to use a screen protector. That coating wears off over time and eventually your factory iPhone screen protector will be covered in smudges (unless you buy a new one regularly).

    You also need one with very good installation instructions, and maybe also some kind of alignment jig built into the packaging. Whether or not that’s necessary depends on what model phone you have, since some iPhones are more difficult than others.

    I’m a big fan of D-Brand screen protectors — they’re well made and the installation instructions are the best. Even if you buy someone else’s protector it’s worth watching the D-Brand instructions for tips. This one doesn’t have an alignment jig, because one isn’t really needed. For some phones or protectors it really is needed. https://youtu.be/OJW89JK3zZk



  • For anyone who wants to go down that path — I recommend reading (or listening to) “Build” by Tony Fadell:

    https://www.buildc.com/the-book

    Fadell worked in the R&D departments of various companies you may have heard of such as Apple, Google, Sony, Philips, Motorola, Toshiba… as well as others you likely haven’t heard of (because they were moonshots that exploded mid-flight - one tried to invent an iPhone in 1990, when touch screens were shit, networking was wired, and batteries were disposable. Talk about ahead of its time). That failure wasn’t wasted - a lot of the mistakes they made were avoided years later when he was leading one of the teams that invented the iPhone (not from scratch, but by learning from past work).

    He’s now basically retired and spends his time advising the next generation, but that book covers all the basics. It really is a gift to the billions of people who can’t ask an experienced product developer for advice. And if you are lucky enough to be able to ask, the book will give you enough knowledge to avoid asking stupid questions.


    1. Write down who your customers are.
    2. Write down what problem your customers have, which can be solved with your product.
    3. Write down how your product solves the problem.
    4. Figure out how you can achieve that goal (this needs to be separate from step 3 - you’re essentially tackling the same thing from a different perspective… helping you see things that might not be visible from the other one).
    5. Anything that does not bring your product closer to the goal(s), remove it from your product.
    6. Anything that would bring you closer, but isn’t achievable (not enough resources, going to take too long, etc), remove those as well.

    Those are not singular items. You have multiple customers (hopefully thousands). Your customers have multiple problems your product can solve. Actually solving all of those problems will require multiple things.

    If that list sounds functional, that’s because good design is functional. Aesthetics matter. Why do you choose a black shirt over an orange shirt with rainbow unicorns? Take the same approach for the colours on your app icon. Why do you choose jeans that are a specific length? Take that same approach for deciding how many pixels should separate two buttons on the screen.


    You said you struggle with looking at a blank page. Everyone does. If you follow the steps I outlined above, then you won’t be working with a blank page.



  • I guess if you spend all your time working with a laptop on a kitchen counter, this product can help with that.

    … but WTF are you doing working on a kitchen counter? Get yourself a proper desk. Seriously. And if you’ve got a proper desk two or three large displays will provide better pixel density and a more comfortable work environment for a lot less money.

    I can get behind using it for meditation, gaming, watching videos, etc… but no way am I going to spend this kind of money on any of those use cases. I look forward to a future version that is an order of magnitude cheaper.