• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: July 10th, 2023



  • Favorite for quick tasks: JavaScript. The last few years of ECMAScript features have made it an incredibly productive language.

    Favorite for hobby stuff: Rust, but with caveats. I miss default parameters, I dislike the syntax soup, and the async system has too many "standards" (see the xkcd on competing standards).

    Favorite for work: JavaScript/TypeScript. Having my team be fully capable of working on any part of our competencies with just one language is huge. Sharing code between frontend and backend, sharing it across products, and easily finding developers all make it an easy choice.

    Least favorites:

    PHP: magic quotes?

    Golang: using casing to establish public vs. private?

    Objective-C: the worst combo of every one of its predecessors.

    Java: forcing the paradigm that everything is an object causes so much boilerplate.

    VB5/6/A: triggering a button with = True, using a single equals sign for both assignment and equality, and callbacks that are an absolute nightmare.


  • It’s a bit complex, and you can find a better answer elsewhere, but a model is a set of “weights” and “biases” that make up the pathways between the neurons in a neural network.

    A model can include other features, but at its core it gives users the ability to run an “AI” like GPT, though models aren’t limited to natural language processing.

    Yes, you can download the models and run them on your computer. Each repository usually has its own instructions, but in general it involves downloading the model, which can be very large, and running it with an existing ML framework like PyTorch.

    It’s not a place for the layman right now, but with a few hours of research you could make it happen.

    I personally run several models that I got through Hugging Face on my computer; Llama 2, which is similar to GPT-3, is the one I use the most.
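    To make the “weights and biases” idea above concrete, here is a toy sketch of a single artificial neuron (an illustration only, not any particular framework’s API): a model is, at heart, an enormous collection of numbers like these.

```python
import math

# A single artificial neuron: a weighted sum of inputs plus a bias,
# passed through an activation function. A model like Llama 2 is
# billions of weight/bias values wired together like this.
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid activation

# Two inputs, two learned weights, one learned bias.
out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)
print(round(out, 3))  # a value between 0 and 1
```

    "Downloading a model" mostly means downloading those weight and bias values; the framework (PyTorch etc.) supplies the wiring that runs them.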



  • I went all out and got the 192GB model, and I’ve been using it to run local machine learning models successfully. Llama 2 70B runs fairly well after quantizing to 16-bit instead of the original 32-bit, which ate all 192GB plus 40GB of swap before running out of system memory. Smaller models like Llama 2 7B are wicked fast.

    Performance for normal development is simply divine. I can have basically every project I ever work on open on my dual 4K monitors without any slowdown, even while simultaneously compiling and running models in the background, without a stutter.

    My biggest complaint so far is that my Thunderbolt 4 dock doesn’t support the 144Hz my monitors can crank out.

    I have had one system crash so far (not sure of the cause), but overall stability has been impeccable.

    I’m used to x86 machines, and one flaw with the Apple silicon switch in general is that some of my React Native libraries were compiled in a way that makes them difficult to build without Rosetta. That’s obviously not Apple’s problem, nor is it specifically a Studio issue.

    9k was incredibly painful, but I’m happy to have a machine that outperforms most retail machines on the market for VRAM and machine learning without spending even more.
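    The quantization numbers in the first paragraph line up with simple back-of-envelope arithmetic. This is a rough sketch that counts raw parameter storage only, ignoring activations, KV cache, and framework overhead:

```python
# Rough memory footprint of Llama 2 70B at different weight precisions.
# Raw weight storage only; real usage adds activations and overhead.
PARAMS = 70e9  # 70 billion parameters

fp32_gb = PARAMS * 4 / 1e9  # 4 bytes per 32-bit weight
fp16_gb = PARAMS * 2 / 1e9  # 2 bytes per 16-bit weight

print(f"32-bit: {fp32_gb:.0f} GB, 16-bit: {fp16_gb:.0f} GB")
```

    At 32-bit that's roughly 280GB, more than 192GB of RAM plus 40GB of swap, which is why it ran out of memory; at 16-bit it drops to roughly 140GB, which fits comfortably.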