  • The name of the function, what goes in, and what comes out should in most cases be enough to get a good idea of what the function does.

    It also helps to make a diagram of how everything ties together. Just boxes and arrows is enough.

    When writing your own code, it takes a bit of experience to know when to put something in its own function. It’s very obvious when you’re duplicating code. It’s also very common to split a function up when it gets too big. Look for bits of functionality that you can give a good name, as in the sketch below.
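
    As an illustration (the domain and helper names here are made up), a small Go sketch of what extracting well-named functions can look like:

    ```go
    package main

    import (
        "fmt"
        "strings"
    )

    // parseCSVLine splits a raw line into trimmed fields.
    func parseCSVLine(line string) []string {
        fields := strings.Split(line, ",")
        for i, f := range fields {
            fields[i] = strings.TrimSpace(f)
        }
        return fields
    }

    // isValidRecord reports whether a record has the expected number of fields.
    func isValidRecord(fields []string) bool {
        return len(fields) == 3
    }

    // formatRecord turns the fields into a human-readable summary.
    func formatRecord(fields []string) string {
        return fmt.Sprintf("%s <%s> (%s)", fields[0], fields[1], fields[2])
    }

    func main() {
        // With the pieces named, the main flow reads like a description of itself.
        fields := parseCSVLine("Ada Lovelace, ada@example.com, London")
        if isValidRecord(fields) {
            fmt.Println(formatRecord(fields))
        }
    }
    ```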


  • Absolutely true, although imo not necessarily as a conscious choice. It’s simply earth’s feedback systems. We cram all kinds of animals together and accidentally breed highly contagious viruses, which come back to bite us in the ass. We destroy biodiversity and now people in certain parts of the world have to manually pollinate crops, because the insects all died. Etc.

    If all goes well, earth will find a balance again that still favours life (possibly not human life). If not, earth might end up like Venus. Not a big deal for earth, but a bit of a shame for all the life we currently see here.



  • Oh, it should absolutely be the team’s decision and you’re also totally right that Kanban requires a more mature team. People indeed need to be able to recognise and ask for help when they’re stuck (which means being vulnerable, but also simply being able to formulate the right questions). People also need to be able to give feedback to their team members when they feel or see that someone is struggling or not delivering enough.

    To facilitate this, I always hold some form of retrospective in my teams, even when doing Kanban. Sometimes only once every other month, sometimes every two weeks. It highly depends on the maturity of the team and the customer.


  • I work in a company where we say that everyone is an expert (and to a very large extent this is really true). We create teams of experts, including more business-savvy people. Everyone respects each other’s expertise and makes sure they can apply it as well as possible. We don’t infringe upon each other’s expertise. We might ask another expert about the why or the how, but we should not assume we know better. Obviously this happens sometimes, but then we remind each other that we’re all experts and that an engineer wouldn’t like to be told by marketing how to do their job either.

    I think this fits nicely with ‘stay in your lane’ and actually makes it easy to remind people to do so. It’s in the core values of the company that people excel in their lane and cooperate with people in other lanes.


  • I would even argue that points, stories, and sprints are not things you need. If you go Kanban, you don’t need sprints. You still need to keep producing, and you probably want a feel for complexity so you can prioritize, but that can be done without points.

    Stories are also very Scrum-specific, and you can turn them into whatever format you want. I usually still call them stories, but they’re basically just a little card that describes the context (why you want something) and the deliverables (what will be implemented to meet that want).
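
    As an illustration, such a card could look something like this (the content is made up):

    ```
    Title: Export report as CSV
    Context (why we want it):
      Finance imports the monthly figures into their own tooling,
      which only accepts CSV.
    Deliverables (what will be implemented):
      - An "Export CSV" button on the report page
      - The exported file contains the same columns as the on-screen table
    ```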




  • Correct, it’s not just regurgitating words; it’s predicting which token comes next. A token is sometimes a whole word, but for longer words it’s a part of the word (tokenization has a few more rules than that, but that’s the gist).

    How it knows which token comes next is why the current generation of LLMs is so impressive. It seems to have learned the rules that underpin our languages, to the point that it seems to even understand the content. It doesn’t just know the grammar rules (without anyone telling it, it just learned the patterns), it also knows which words belong together in which context.

    That context comes from your prompt plus some preset additions (e.g. that it is an OpenAI LLM). So being able to predict a token correctly is one part; having a good context is the other. This is why prompt engineering quickly became a thing. It is also why supporting bigger contexts became a thing (but a larger context requires way more processing power, so there’s a trade-off there).

    By the way, it’s not just the trained model + context that gives you the output of ChatGPT. I’m pretty sure there are layers before and after, possibly using other ML models, that filter content or make it more fit for processing. This is why you can’t ask it how to make bombs, even though such recipes are in its training set and it could very likely create one based on that.
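
    To make “predicting the next token” concrete, here is a deliberately tiny Go sketch that predicts the next word by counting which word most often follows another in a small corpus. Real LLMs use subword tokens and neural networks rather than frequency counts, so this only illustrates the task, not the technique:

    ```go
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corpus := "the cat sat on the mat the cat ate the fish"
        words := strings.Fields(corpus)

        // next[w] maps each following word to how often it occurred after w.
        next := map[string]map[string]int{}
        for i := 0; i+1 < len(words); i++ {
            cur, nxt := words[i], words[i+1]
            if next[cur] == nil {
                next[cur] = map[string]int{}
            }
            next[cur][nxt]++
        }

        // Predict the most frequent follower of "the".
        best, bestCount := "", 0
        for w, c := range next["the"] {
            if c > bestCount {
                best, bestCount = w, c
            }
        }
        fmt.Printf("after %q, most likely next word: %q\n", "the", best)
    }
    ```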




  • I started a little over half a year ago with Go, coming from Python like the author. I definitely enjoy working in a strongly typed language, and Go is usually quite fun to work with. This week I’m actually implementing a concurrency pattern for a ‘real’ problem, so I’m eager to see how that works in practice. I’ve yet to come across something where generics really make sense, but I’m definitely curious to explore that with a real case as well.
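
    The comment doesn’t say which pattern is meant, but as a sketch, here is one classic Go concurrency pattern (a worker pool), written with a generic signature since that is also one of the few places where generics tend to pay off (all names here are illustrative):

    ```go
    package main

    import (
        "fmt"
        "sync"
    )

    // workerPool runs `workers` goroutines that consume jobs from a channel
    // and send results to another. The generic signature lets the same pool
    // work for any job/result types.
    func workerPool[J, R any](jobs []J, workers int, work func(J) R) []R {
        in := make(chan J)
        out := make(chan R)

        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := range in {
                    out <- work(j)
                }
            }()
        }

        // Close out once all workers are done, so the collector loop ends.
        go func() {
            wg.Wait()
            close(out)
        }()

        // Feed the jobs and close the input channel.
        go func() {
            for _, j := range jobs {
                in <- j
            }
            close(in)
        }()

        var results []R
        for r := range out {
            results = append(results, r)
        }
        return results
    }

    func main() {
        squares := workerPool([]int{1, 2, 3, 4, 5}, 3, func(n int) int {
            return n * n
        })
        fmt.Println(squares) // order may vary, e.g. [1 9 4 16 25]
    }
    ```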