I use gitlab ci mainly and dabble in github actions. Can you clarify how “Not even Github managed to pull that off”? IIRC, actions is quite featureful and it’s open-source, so I assume it can be run with self-hosted runners as well.
fascinating. I wonder where the line is between the cold preserving the body and the cold causing hypothermia that could lead to death.
what are the alternatives to ENV that are preferred in terms of security?
yeah, I guess maybe the formatting and the verbosity seem a bit annoying? Wonder what alternative solutions could better engage people from Mastodon, which is what this bot is trying to address.
edit: just to be clear, I’m not affiliated with the bot or its creator. This is just my observation from multiple posts I’ve seen this bot comment on.
I’m curious, why is this bot currently being downvoted for almost every comment it makes?
Thanks for the suggestions! I’m actually also looking into llamaindex for more conceptual comparison, though I haven’t gotten to building an app yet.
Any general suggestions for locally hosted LLMs with llamaindex, by the way? I’m also running into some issues with hallucination. I’m using Ollama with llama2-13b and the bge-large-en-v1.5 embedding model.
Anyway, aside from conceptual comparison, I’m also looking for more literal comparison. AFAIK, the choice of embedding model will affect how similarity is defined. Most current LLM embedding models are fairly abstract, so the similarity will be conceptual: “I have 3 large dogs” and “There are three canines that I own” will probably score as very similar. Do you know which embedding model I should choose for more literal comparison?
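To illustrate the conceptual behavior, here’s a minimal sketch assuming the sentence-transformers package and the bge model mentioned above (my own toy snippet, not from any project discussed here):

```python
# Compare two sentences with a bge embedding model; cosine similarity here
# captures conceptual closeness rather than literal word overlap.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("BAAI/bge-large-en-v1.5")
emb = model.encode(
    ["I have 3 large dogs", "There are three canines that I own"],
    normalize_embeddings=True,
)
# Expect a high score despite almost no shared words.
print(util.cos_sim(emb[0], emb[1]))
```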
That aside, like you indicated, there are some issues. One of them involves length. I hope to find something that can iteratively build up from similar sentences to find similar paragraphs. I can take a stab at coding it up, but was just wondering if there are similar frameworks out there already that I can model it after.
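Something like this rough sketch is what I have in mind — the function name and the aggregation rule are my own, reusing the model from the snippet above:

```python
import numpy as np

# Score a pair of paragraphs by aggregating their best sentence-level matches:
# for each sentence in A, keep its best match in B, then average.
def paragraph_similarity(model, para_a: str, para_b: str) -> float:
    sents_a = [s.strip() for s in para_a.split(".") if s.strip()]
    sents_b = [s.strip() for s in para_b.split(".") if s.strip()]
    emb_a = model.encode(sents_a, normalize_embeddings=True)
    emb_b = model.encode(sents_b, normalize_embeddings=True)
    sims = np.asarray(emb_a) @ np.asarray(emb_b).T  # cosine, since normalized
    return float(sims.max(axis=1).mean())
```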
this is interesting, but it’s not open source yet? I couldn’t find the code, only the author saying that the intent is to open source it.
I think apps like this are really interesting and could really benefit from selfhosting (the LLM, the app deployment, or both), especially given the potential security/privacy issues, as well as lock-in issues with OpenAI.
got into coding cuz I found out that’s how I can automate analysis and play with research questions more easily.
I think many have also been wondering about version control for legislation/law documents for some time. But I’ve never understood why it hasn’t been realized yet.
such a rad pic!
Great explanation. Two questions: what’s the likelihood of an SSO page being spoofed? This seems like an all-eggs-in-one-basket sitch, so what are the potential threats to it?
Let me see if I get your point. Are you saying most questions on Lemmy ask for opinions, which makes them look like they were asked to generate training data for AI models?
If so, I’m not entirely sure I agree. There’s a ton of info online about any given topic, which can be very overwhelming. Maybe that causes people to prefer to seek out personal experiences and opinions from others on such topics, rather than just cold hard facts.
It may also depend on which communities the questions you’re sampling were asked in.
thanks for your answer! Is this the same as or different from indexing to provide context? I saw some people ingesting large corpora of documents/structured data, like with LlamaIndex. Is that an alternative way to provide context, or something similar?
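For reference, this is the kind of ingestion I mean — a minimal sketch assuming a recent llama-index release with its Ollama and HuggingFace integrations installed, using the local models I mentioned elsewhere:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.ollama import Ollama

# Locally hosted models: llama2-13b served by Ollama, bge embeddings from HF.
Settings.llm = Ollama(model="llama2:13b")
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-large-en-v1.5")

# Ingest a folder of documents, embed them, and answer queries with the
# retrieved chunks injected into the prompt as context.
documents = SimpleDirectoryReader("./docs").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What do these documents say about X?"))
```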
I know nothing about “in context learning” or legal stuff, but intuitively, don’t legal documents tend to reference each other, especially the more complicated ones? If so, how would you apply in-context learning if you don’t know which ones may be relevant?
When you find one or successfully train one, I’d love to know as well. Maybe you can crosspost this?
I saw this dataset on HuggingFace; does it fit your use case? https://huggingface.co/datasets/lexlms/lex_files
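If it helps, here’s a quick way to peek at it, assuming the `datasets` library — note the config name below is a guess on my part, so check the dataset card for the actual subsets:

```python
from datasets import load_dataset

# Stream a sample without downloading the whole corpus.
# NOTE: "us_legislation" is a guessed config name; see the dataset card.
ds = load_dataset("lexlms/lex_files", "us_legislation", split="train", streaming=True)
print(next(iter(ds)))
```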
Here are some options:
I think the bias issues will always be there, but they’re usually worse, detected later (or not at all), and exacerbated when the people working on the original problem don’t suffer from those issues themselves. E.g.: if most people working on facial recognition are white and male.
While I do have my reservations about AI technologies, I think it’s a worthwhile effort for the people encountering these issues to work on identifying and addressing them, especially when, as in this case, they lead the effort rather than just consult on it.
They can lead the effort to collect new data, adopt new ways of looking at the data, and define metrics that are more appropriate for the target audience. Based on the article, I think they are doing this.
one case is when one is learning, experimenting and innovating.
To follow through with the analogy a bit, trying to craft the wheel yourself can give insights and a more robust understanding of how the commercial wheels work, especially in tandem with the complex systems the wheels operate in, e.g. engines, motors, different types of environments and their effects on the wheels, etc. This may better prepare us for when things break or need to be adapted to environments that the commercial ones were not specifically designed for in the first place.
However, this does not mean the wheels one re-invents can easily replace the ones that have been stress-tested by many, especially in more critical situations.
An example is encryption implementation. Playing around with it is fun, educational, and insightful. If you do research in crypto, by all means, play with the wheels, pull them apart, physically and mathematically.
But, unless you really, really know what you’re doing, cooking up your own implementation in an actual product offered to customers is, almost always, a promise of future data breaches. And this has happened in the real world many times already, I believe.
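As a toy illustration (my own contrived example, not from any real product) of the kind of subtle flaw a homegrown scheme ships with, a repeating-key XOR “cipher” leaks relationships between plaintext bytes without the key ever being recovered:

```python
# Encrypt/decrypt are the same operation; the key repeats over the data.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"attack at dawn! attack at dawn!"
ct = xor_cipher(msg, b"key")

# XORing two ciphertext bytes one key-length apart cancels the key entirely,
# revealing the XOR of two plaintext bytes -- no key required.
assert ct[0] ^ ct[3] == msg[0] ^ msg[3]
print("ciphertext leaks plaintext structure without the key")
```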
For search engines, I’ve never used it, but there’s Whoogle, which is supposed to be a proxy for Google search.
Wonder how the survey was sent out and whether that affected sampling.
Regardless, with ~3-4k responses, that’s disappointing, if not concerning.
I only have a more personal sense of Lemmy. Do you have a source on Lemmy’s gender diversity?
Anyway, what do you think are the underlying issues? And what would be some suggestions to the community to address them?