Thanks for sharing, I’m always interested in these topics
I do not think so. Your directional hearing depends on the shape of your head and outer ears too. This means that everybody localizes sound a little differently, because their anatomy is different. I think people should be able to adapt to these smaller changes over time.
But I’m not sure what I’m saying is true. Games usually use binaural sound, but it may not be effective for some people, because the head-related transfer function (HRTF) the game uses sounds different from what they would hear irl.
I agree, but it’s better than nothing. I’m sorry I misled you.
GPT4All is available on Linux; it is open-source software that can run LLMs locally.
I did manage to write a back-propagation algorithm, but at this point I don’t fully understand the math behind it. Generally, back-propagation takes the activations and calculates the delta (the error term) from the activation and the target output (only on the last layer). I don’t know where tokens come in. From your comment it sounds like it has something to do with unsupervised learning. I am also not a professional. Sorry if I didn’t really understand your comment.
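For what it’s worth, here is roughly what I mean by the delta on the last layer, as a tiny sketch. The sigmoid activation, squared-error loss, and the toy numbers are my own assumptions for illustration, not anything from your setup:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny one-layer network: the output-layer delta described above is
# delta = (activation - target) * activation_derivative.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 1))          # weights: 2 inputs -> 1 output (made up)
x = np.array([[0.5, -1.0]])          # one training example (made up)
target = np.array([[1.0]])
lr = 0.5                             # learning rate (arbitrary choice)

for _ in range(200):
    a = sigmoid(x @ W)                  # forward pass
    delta = (a - target) * a * (1 - a)  # error term on the last layer
    W -= lr * (x.T @ delta)             # gradient step on the weights

# after training, the output `a` should be close to the target
```

For hidden layers the delta is instead propagated backwards through the weights, which is the part I still don’t fully get.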
I have experience creating supervised learning networks (not large language models). I don’t know what tokens are; I assume they are output nodes. In that case, I don’t think increasing the number of output nodes makes the AI a lot more intelligent. You could measure confidence with the output nodes if they are designed accordingly (one node corresponds to one word, so confidence can be read from the output strength). AIs are popular because they can overcome unknown circumstances (in most cases), like when you input a question in a slightly different way.
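The one-node-per-word idea above can be sketched like this: a softmax turns the raw node strengths into probability-like confidences. The vocabulary and the raw strengths here are made up for illustration:

```python
import numpy as np

# Hypothetical output layer: one node per word, as described above.
words = ["yes", "no", "maybe"]        # made-up vocabulary
logits = np.array([2.0, 0.5, 0.1])    # raw output-node strengths (made up)

# Softmax (shifted by the max for numerical stability)
probs = np.exp(logits - logits.max())
probs /= probs.sum()

best = int(np.argmax(probs))
print(words[best], float(probs[best]))  # strongest word and its confidence
```

So the “confidence” is just how much of the total output strength lands on the winning node.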
I agree with you that AI has a problem understanding the meaning of words. The AI’s correct answers happened to be correct because the order of the output words happened to match the order of the correct answer’s words. I think “hallucinations” happen when there is no sufficient answer to the given problem, so the AI gives an answer from a few random contexts pieced together in the most likely order. I think you have a mostly good understanding of how AIs work.
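A toy way to see the “most likely order” point: pick each next word purely by how often it followed the previous word in some text, with no notion of meaning. The corpus here is made up, and real LLMs are far more sophisticated than this bigram sketch:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat because the cat was tired".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Greedily emit the most likely next word, four times.
word, out = "the", ["the"]
for _ in range(4):
    word = following[word].most_common(1)[0][0]
    out.append(word)

print(" ".join(out))  # fluent-looking word order, zero understanding
```

The output reads fine only because the word order happened to match the training text, which is the failure mode you described.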