This is something I think needs to be interrogated. None of these models, even the supposedly open ones, are actually “open” or even currently “openable”. We can know the exact weights of every single parameter, the code used to construct the model, and the data used to train it, and that information gives us almost no insight into its behavior. We simply don’t have the tools to actually “read” a machine learning model the way you would an open source program; the tech produces black boxes as a consequence of its structure. We can learn about how they work, for sure, but the corps making these things aren’t that far ahead of the public when it comes to understanding what their models are doing or how to change their behavior.
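To make the point concrete, here's a minimal sketch (a toy two-layer network in NumPy, not any real released model): every parameter is fully inspectable, yet the printed numbers carry no human-readable logic, and the only way to learn what the model does is to run it and observe.

```python
import numpy as np

# A tiny two-layer network with fixed random weights.
# The parameters are completely "open" -- we can print every number --
# yet the numbers alone say nothing about what the function computes.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def net(x):
    """Forward pass: behavior only emerges when we execute it."""
    return np.tanh(x @ W1) @ W2

# Full transparency into the parameters...
print(W1)  # just a grid of floats, no legible logic

# ...but the only way to learn what the model *does* is to probe it,
# one input at a time.
x = rng.normal(size=(1, 4))
print(net(x))
```

Scale that opacity up to billions of parameters and the "open weights" release starts to look less like source code and more like a sealed artifact you can copy but not read.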