That ability is known as Dave😁
The problem I have with the ID.7 is that it is bigger on the outside, smaller on the inside, and not as fun to drive. Having said that, I recently got to drive a new Model 3, and the changes of the last 5 years have not done it any favours. Quieter, yes, but that’s about it for the positives.
I can’t come in today, I’ve got a bad case of X… I don’t know, seems like a reason to self-isolate already.
If he does die, do you think the establishment would tell anyone? They’d AI his looks, speech and Twitter posts and claim he’s been making TV appearances for years.
Ah, the kid brother defence. “But big brother did it, I had the right to!”
Still wrong! Someone else being shitty and prejudiced does not in any way, shape or form excuse your prejudice. I’m sorry you’ve had to face prejudice, but this way you are paying it forward.
Oh… There is another month until the solstice. It will get worse before it gets better.
My sincerest condolences.
There must be. Recall and infosec are mutually exclusive by definition!
I’m just getting started, but my plan is to use it to evaluate policy docs. There is so much context to keep track of, so any way to load more of it into the analysis will be helpful. Learning how to add Excel data to the analysis will also be a big step forward.
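Roughly what I have in mind, as an untested sketch (assumes a local Ollama server on the default port; the file name and model are just placeholders):

```python
# Untested sketch: flatten a spreadsheet to plain text and prepend it to the prompt.
import pandas as pd
import requests

df = pd.read_excel("policy_data.xlsx")   # reading .xlsx needs openpyxl installed
table_text = df.to_csv(index=False)      # plain text the model can read

prompt = (
    "Here is a table extracted from a policy document:\n\n"
    f"{table_text}\n"
    "Summarise the obligations it describes."
)

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5:14b", "prompt": prompt, "stream": False},
)
print(resp.json()["response"])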
I will have to check out Mistral :) So far Qwen2.5 14B has been the best at analysing my test scenario, but I guess an even higher-parameter model will have its advantages.
Thank you! Very useful. I am, again, surprised that a better way of asking questions affects the answers almost as much as using a better model does.
I need to look into flash attention! And if I understand you correctly, a larger llama3.1 model would be better equipped to handle a larger context window than a smaller llama3.1 model?
Thanks! I actually picked up the concept of a context window, and from there how to create a Modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how does the model size compare?
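For anyone else reading along: the Modelfile sets the window once, but you can also pass it per request. An untested sketch against a default local Ollama:

```python
import requests

# Same effect as "PARAMETER num_ctx 16384" in a Modelfile, but per request.
# Equivalent Modelfile:
#   FROM qwen2.5:14b
#   PARAMETER num_ctx 16384
# then: ollama create qwen-bigctx -f Modelfile
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:14b",
        "prompt": "Summarise the policy text below: ...",
        "stream": False,
        "options": {"num_ctx": 16384},  # window size is in tokens, not characters
    },
)
print(resp.json()["response"])
```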
Thank you for your detailed answer :) It’s been 20 years and 2 kids since I last tried my hand at reading code, but I’m doing my best to catch up 😊 The context window is a concept I picked up from your links, and it has helped me a lot!
The problem I keep running into with that approach is that only the last page actually gets summarised, and some of the texts are… longer.
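In case it helps anyone else hitting this: a rough, untested sketch of the split-then-combine idea (default local Ollama; the model name is just an example):

```python
import requests

def ask(prompt: str, model: str = "qwen2.5:14b") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
    )
    return resp.json()["response"]

def summarise(text: str, chunk_chars: int = 6000) -> str:
    # Summarise each slice separately so no page gets silently dropped,
    # then summarise the partial summaries.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [ask(f"Summarise this excerpt:\n\n{c}") for c in chunks]
    return ask(
        "Combine these partial summaries into one coherent summary:\n\n"
        + "\n\n".join(partials)
    )
```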
Do you know of any nifty resources on how to set up RAG using ollama/webui? (Or even fine-tuning?) I’ve tried to set it up, but the documents I provide don’t seem to be analysed properly.
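For reference, this is the bare-bones version of what I’m trying to get webui to do, as an untested sketch (assumes the nomic-embed-text embedding model has been pulled and Ollama runs on the default port):

```python
import requests

def embed(text: str) -> list[float]:
    # /api/embeddings returns {"embedding": [...]} for a single prompt
    resp = requests.post(
        "http://localhost:11434/api/embeddings",
        json={"model": "nomic-embed-text", "prompt": text},
    )
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Retrieve the k document chunks closest to the question; these get
    # pasted into the prompt instead of the whole document.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```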
I’m trying to get the LLM to read/summarise a certain type of (wordy) file, and it seems the query prompt is limited to about 6k characters.
For me it’s the opposite. My body may be on the premises, but mentally…
Well, that’s been the basis for some other products. AMD and Intel come to mind 😊 They both have IP the other needs, and historically Intel has been the dominant one, but now the tables have turned somewhat.
I feel that is the issue with this post to begin with: is it a shitpost or not? Seems more like a load of hot air to me.
Sure, better range is always nice, if that’s the case, but I didn’t drive it enough to be able to come to that conclusion. The power usage from previous owners was as expected, though.