You might want to check if Windows is the culprit.
Or your “time clock earth sounds” app from the not-so-well-policed app store takes silent background screenshots, grayscales them, and sends them to its host for OCR.
I agree this permission is annoying. But I differ in that I feel it should be system-controlled and invokable by apps that identify specific fields to be blocked, instead of just disabling it outright.
If this is the case for you (I have both in my house), I recommend putting your Roku TV behind a Pi-hole DNS. It will block the TV's ad requests at the DNS level while letting content and video go through.
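If you want to sanity-check that it's working, here's a minimal sketch; the domain names are pure placeholders (pull the real ones your TV calls from your Pi-hole query log). It just shows what DNS-level blocking looks like: a sinkholed name resolves to 0.0.0.0 while content hosts resolve normally.

```python
# Placeholder domains -- substitute hosts from your own Pi-hole query log.
import socket

DOMAINS = {
    "ads.example-tv-vendor.com": "expected: blocked",    # hypothetical ad host
    "cdn.example-streaming.com": "expected: resolves",   # hypothetical content host
}

for name, expectation in DOMAINS.items():
    try:
        addr = socket.gethostbyname(name)
        status = "SINKHOLED" if addr == "0.0.0.0" else f"resolves to {addr}"
    except socket.gaierror:
        status = "NXDOMAIN (blocked at DNS)"
    print(f"{name}: {status} ({expectation})")
```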
Your understanding of frame generation is incorrect.
Again, let's use an absurdly low FPS with a big frame window as the example: 10ms between frames.
If your frame window is 10ms, Frame 1 lands at 0ms and Frame 2 at 10ms. Frame generation is not just interpolation; that is what your new TV does when you activate motion smoothing and soap-opera mode. That is not what framegen is, at all.
In frame generation, the frame generation engine (driver or program) stores a motion vector array. This determines the trend line of how pixels are likely to change. In our example, the motion vectors for the ball indicate large motion in, say, a diagonal direction, while the overall frame indicates little or no motion because the user isn't swinging the camera wildly. The frame generation then uses Frame 1 to estimate a Frame 1.5, and the ball actually moves in the image thanks to motion vector analysis. The ball moves independently of the scene itself (which only shifts with the user's camera), so the user can see the ball itself moving against the background.
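To make the mechanism concrete, here's a toy sketch (my own simplification, not NVIDIA's, AMD's, or anyone's actual pipeline) of what "advance pixels along their motion vectors" means:

```python
# Toy motion-vector frame generation: estimate a Frame 1.5 by advancing
# each moving pixel of Frame 1 halfway along its per-pixel motion vector.
import numpy as np

def generate_midframe(frame1: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """frame1: (H, W) grayscale image. motion: (H, W, 2) per-pixel (dy, dx)
    vectors describing motion from Frame 1 toward Frame 2."""
    mid = frame1.copy()
    for y, x in np.argwhere(np.any(motion != 0, axis=2)):
        value = frame1[y, x]
        mid[y, x] = 0  # naive disocclusion fill; real framegen has to hallucinate this
        ny = int(round(y + motion[y, x, 0] * 0.5))
        nx = int(round(x + motion[y, x, 1] * 0.5))
        if 0 <= ny < mid.shape[0] and 0 <= nx < mid.shape[1]:
            mid[ny, nx] = value
    return mid

# Toy scene: a "ball" (bright pixel) on a static background.
frame1 = np.zeros((8, 8))
frame1[2, 2] = 255                # ball at (2, 2) in Frame 1
motion = np.zeros((8, 8, 2))
motion[2, 2] = (2, 2)             # ball's vector: 2 px down, 2 px right by Frame 2
mid = generate_midframe(frame1, motion)
print(np.argwhere(mid == 255))    # [[3 3]] -- the ball has visibly moved at t=1.5
```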
So, in Frame 1.5, the ball you are seeing, as well as the scene, have actually moved. Now the user can see this motion, and let's say they didn't notice it in Frame 1. This means Frame 1.5 is a chance for them to react! And their inputs go through sooner, reducing true latency by allowing them to react to in-game stimuli faster. Yes, even if the frame is “faked.”
In reprojection, at Frame 1.5RP, again, crucially, there is no new scene data. Reprojection does not use motion vectors; it uses the camera and geometry only. If the user isn't moving the POV at all, for example, the reprojection just puts the frame where it already was, and the user waits the full 10ms before the ball appears to move. Even if the camera is moving, reprojection only adjusts the scene's angle relative to the camera; the ball does not move within the overall scene. Again, consider the ball flying left while the user walks left. The reprojection cannot move the ball left. If anything, when the reprojection is applied to the existing scene geometry, the opposite occurs: the ball may even appear to move right or slow down due to parallax.
Reprojection takes old frame data and moves it like flat cards in 3D space, so the ball stays in position within the scene until Frame 2. It can only be affected by the camera motion that drives the reprojection, not by any other rendering data, and what the user sees of the ball doesn't change until 10ms later. Only the overall flat scene can be reprojected, so the user tilting or swinging the camera can feel instantly responsive. But until the next render pass, the real motion data, delivered either via motion vectors or Frame 2, doesn't reach them in a reprojection at 1.5.
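Same toy setup, but with reprojection's camera-only behavior. (Real ASW-style reprojection warps in 3D using depth; the property I'm describing survives the flat simplification.)

```python
# Toy reprojection: the whole last frame shifts opposite to the camera delta.
# There is no per-object motion input at all.
import numpy as np

def reproject(last_frame: np.ndarray, cam_dx: int, cam_dy: int) -> np.ndarray:
    """Slide the entire frame by the inverse of the camera movement."""
    return np.roll(last_frame, shift=(-cam_dy, -cam_dx), axis=(0, 1))

frame1 = np.zeros((8, 8))
frame1[2, 2] = 255                  # the ball, as rendered in Frame 1

# Case 1: camera is still. The reprojected frame is identical -- the ball
# has NOT moved, and won't until Frame 2 arrives 10ms later.
print(np.array_equal(reproject(frame1, 0, 0), frame1))   # True

# Case 2: camera pans right 1 px. Ball and background slide left *together*;
# the ball does not move within the scene.
print(np.argwhere(reproject(frame1, 1, 0) == 255))       # [[2 1]]
```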
So again, your understanding of current framegen is wildly incorrect. And what you are describing as reprojection “getting better” is essentially adding reprojection to framegen: use motion vectors to render the new portion of the frame, and use the reprojection to adjust the overall POV based on camera input. Which, again, works well. Adding reprojection and framegen together is not a bad idea, and reprojection is great for reducing perceived latency (which is why it is essential for avoiding motion sickness in VR). These are two techniques solving different forms of latency issues. Combined, they offer far more.
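Here's that combined idea as a toy sketch (an assumed design for illustration, not Intel's or anyone's shipping implementation): framegen advances objects along their motion vectors, then reprojection snaps the result to the latest camera input.

```python
# Toy combined pipeline: motion-vector framegen first, camera reprojection second.
import numpy as np

def combined_midframe(frame1, motion, cam_dx, cam_dy):
    mid = frame1.copy()
    for y, x in np.argwhere(np.any(motion != 0, axis=2)):
        value = frame1[y, x]
        mid[y, x] = 0                               # naive disocclusion fill
        ny = int(round(y + motion[y, x, 0] * 0.5))  # framegen: advance the object
        nx = int(round(x + motion[y, x, 1] * 0.5))
        if 0 <= ny < mid.shape[0] and 0 <= nx < mid.shape[1]:
            mid[ny, nx] = value
    # Reprojection: apply the *latest* camera input to the generated frame.
    return np.roll(mid, shift=(-cam_dy, -cam_dx), axis=(0, 1))

frame1 = np.zeros((8, 8)); frame1[2, 2] = 255
motion = np.zeros((8, 8, 2)); motion[2, 2] = (2, 2)
# The ball advances to (3, 3) via its vector AND the 1 px camera pan shifts
# the whole view left: both kinds of latency get addressed in one frame.
print(np.argwhere(combined_midframe(frame1, motion, 1, 0) == 255))  # [[3 2]]
```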
Frame reprojection lacks motion data; it's in the name. It is re-projecting the last frame. Frame generation uses the interval between real frames, feeds in vector data, and estimates movement.
If I am trying to follow a ball going across the screen and not moving my mouse, reprojection is flat-out worse, because it is reprojecting the last frame, where nothing moved. Frame 1, Frame 1RP, then Frame 2: 1 and 1RP would have the ball in the exact same place. If I move my viewpoint, the perspective will feel correct, the viewport edges will blur, and the reprojection will map to perspective, which feels better for head tracking in VR. But for information delivery there is no new data, not even a guess. It's still the same frame, just at a different point in space, until the next real frame comes in.
With frame generation, if I am watching this ball again, it now looks more like Frame 1 (real), Frame 1G (estimate), Frame 2 (real). Now Frame 1 and Frame 1G have different data, and 1G is built on the vector data between frames. It's not 100%, but it's an educated guess at where the ball is going between Frame 1 and Frame 2. If I move my viewpoint, it doesn't feel as responsive as reprojection, but the gained “fake” middle frame helps with motion tracking in action.
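In made-up numbers, with the ball crossing 10 px between real renders at 0ms and 10ms:

```python
# Toy timeline for the ball example: where each technique puts the ball at t=5ms.
ball_at = {0: 0, 10: 10}                 # real renders: position (px) at t=0ms, t=10ms

# Reprojection (no mouse movement): reuse Frame 1 unchanged.
frame_1rp = ball_at[0]                   # still 0 px -- no new information

# Frame generation: estimate along the motion vector between real frames.
vector = (ball_at[10] - ball_at[0]) / 10 # 1 px/ms
frame_1g = ball_at[0] + vector * 5       # 5.0 px -- an educated guess, new info

print(f"1RP: ball at {frame_1rp} px; 1G: ball at {frame_1g} px")
```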
The real answer is to use frame generation with low-latency configurations, and also enable reprojection in the game engine if possible. Then you have the best of both worlds. For VR, the headset is the viewport, so it's handled at the driver level. But for flat games, the viewport is a detached virtual camera, so the gamedev has to expose this and set up reprojection, or Nvidia and AMD need to build some kind of DLSS/FSR-like hook for devs to utilize.
But if you could do both at once, that would be very cool. You would get the most responsive feel in terms of lag between input and action on screen, while also getting motion updates faster than a full render pass. So yes, Intel's solution is a step in that direction. But ASW is not in itself a solution, especially for high-motion scenes with lots of graphics. There is a reason the demo engine in the LTT video was extremely basic. If you overloaded it with particle effects and heavy rendering like you see in high-end titles, the smearing from reprojection would look awful without rules and bounding on it.
TBH, on ancient insecure systems that might work.
Not very exciting though
ITT: Ill will towards corporate America.
Like I want someone to go to some of these CEO retreats and really drive home with them, plainly, how much we wish they all just fucking died. Don't care how, but the average citizen actually would rejoice.
And we have the receipts to prove it.
This is a hilariously bad take for anything that isn't VR. Async warping causes frame smearing on detail that is really noticeable when the screens aren't close enough for your peripheral blind spots to make up for it.
It's an excellent tool in the toolbox, but to pretend that async reprojection “solved” this kind of means you don't understand the problem itself…
Edit: also, the LTT video is very cool as a proof of concept, but it absolutely demonstrates my point regarding smearing. There are also many, MANY cases where a clean frame with legible information would be preferable to a lower-latency smeared frame.
And you are missing my point.
You don’t trade one for the other. You add this in the options menu, in a smaller font.
Then when The Crew, XDefiant, LawBreakers, or any of the 30 other games whose servers AAA publishers shut down this year go dark, the people who bought them aren't left unable to play at all. There's a fallback. And it does not affect matchmaking, because it's down the menu, out of the way, and not the default matchmaking method.
Sorry man, but the fundamental backend of IP-based matchmaking is a prerequisite for skill-based matchmaking. At a high level, the skill rankings produce an Elo value or similar ranking and feed that, along with player status, into the active player pool for the region. The active player pool then feeds the game client the IP sets for the current match.
Literally all these games are peer-hosted and require this. Once the match is set up, they literally drop you into a lobby (this part is visible to you) and fill it with IPs (invisible). That is as old as DOOM.
So again, everything costs something, as people aren't free, but this is a function that must exist to power the skill-based matchmaking; it needs only be exposed in the shell.
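If it helps, here's the whole pipeline as a toy sketch. Every name in it is illustrative, not any game's actual API; the point is that the SBMM layer only *orders* the pool, while what actually ships to the clients is still just an IP set.

```python
# Toy matchmaker: Elo layer sorts the pool; IP layer is the real deliverable.
from dataclasses import dataclass

@dataclass
class Player:
    name: str
    elo: int
    ip: str

def make_match(pool: list[Player], size: int = 4) -> list[str]:
    """SBMM layer: sort by Elo and take a group of adjacent-skill players.
    IP layer: what the clients actually receive is just the IP set."""
    lobby = sorted(pool, key=lambda p: p.elo)[:size]
    return [p.ip for p in lobby]

pool = [
    Player("anna", 1430, "203.0.113.10"),
    Player("bo",   1390, "203.0.113.24"),
    Player("cy",   2100, "203.0.113.57"),
    Player("dee",  1410, "203.0.113.88"),
    Player("ed",   1450, "203.0.113.91"),
]
print(make_match(pool))  # the 'invisible' lobby fill: clients just get IPs

# A LAN/lobby fallback is that same final step with a hand-entered IP list,
# skipping the ranking layer entirely:
print(["192.168.1.50", "192.168.1.51"])  # direct-connect lobby, no SBMM
```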
Also, it's not just Valve, it's literally every PC game ever made before the mid-2000s. Jedi Knight II? Unreal Tournament? Quake 3? Hell, emulated PS3 and Switch titles have shown this off as well. All of these are still playable today thanks to not exclusively using skill-based matchmaking.
Great wall of text, defeated by the simple idea that adding a fucking optional LAN or lobby-based IP matchmaking mode can be unranked-only and takes near-zero effort to add.
You want the main game mode with matchmaking? Official ranked server.
Wanna play 2 vs 30 AK-47 vs knives only? Private lobby.
Also, the latter is how the original devs accidentally invented Left 4 Dead: by figuring out that mode was a ton of fun.
I could use more pink pubes.
Fuck Google chrome.
Nice.
Yea, I don't trust any AI models for facts, period. They all just lie. Confidently. The smol model there at least tried and got it right at first… before confusing the sentence context.
Qwen is a good model too. But if you want something to run home automation or do text summaries, smol is solid enough. I'm running on CPU, so it's good enough for me.
Here you go: Review of SmolVLM https://www.marktechpost.com/2024/11/26/hugging-face-releases-smolvlm-a-2b-parameter-vision-language-model-for-on-device-inference/
Model itself: https://huggingface.co/spaces/HuggingFaceTB/SmolVLM
And you can use Ollama to run it locally, and Open WebUI to access it in browser.
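For anyone who wants the no-UI version, here's a minimal Python sketch against Ollama's local HTTP API, standard library only. The model tag is an assumption; substitute whatever `ollama list` shows on your machine.

```python
# Minimal local-LLM call: assumes `ollama serve` is running on its default
# port and a small model has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

payload = {
    "model": "smollm2",   # assumed model tag -- swap for your own
    "prompt": "Summarize: the meeting moved to 3pm and Bob brings the slides.",
    "stream": False,      # one JSON response instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```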
Try again. Simplified models take the large ones and pare them down in terms of memory requirements; they can even be run off the CPU. The “smol” model I mentioned is real, and hyper-fast.
Llama 3.2 is pretty solid as well.
Building on this, and without being too hyperbolic about “realism”: he's wearing a full-body set of reinforced armor that is almost certainly going to assist in compressing the wound, buying his injury a massive amount of time to start with. Assuming for five seconds that he slaps some quick-clot into the hole once he gets in the Bat, or before, then bleeding out wouldn't be a main concern, not right away. Organ damage is his biggest risk, and if he avoided a direct stab into a kidney or something (the armor has gaps but still covers vitals), he could live, if he's lucky, with some back-alley sutures to his intestine, etc.
So, him living isn’t the most insane thing to consider given his known resources and what he could likely have done in a few moments off screen. And over-explaining it in the moment would’ve killed the pacing of the film.
Thing is, for your average user with no GPU who never thinks about RAM, running a local LLM is intimidating. But it shouldn't be. Any system with an integrated GPU, and the more RAM the better, can run simple models locally.
The not-so-dirty secret is that ChatGPT 3 vs 4 isn't that big a difference, and neither is leaps and bounds ahead of the publicly available models for about 99% of tasks. For that 1%, people will ooh and aah over it, but 99% of use cases are only seeing marginal gains on 4o.
And the simplified models that run “only” 95% as well? They use 90% fewer resources and give pretty much identical answers outside of hyper-specific use cases.
Running a “smol” model, as some are called, gets you all the bang for none of the buck, and your data stays on your system and never leaves.
I've been yelling from the rooftops to some stupid corporate types that once the model is trained, it's trained. Unless you are training models yourself, there is no need for the massive AI clusters; you just need the model. Run it locally on your hardware at a fraction of the cost.
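Back-of-envelope, weights-only (quantization schemes vary and KV-cache overhead is ignored, so these are illustrative numbers, not benchmarks): weight memory is roughly parameter count times bytes per parameter.

```python
# Rough weights-only memory: params x (bits per param / 8).
def weights_gb(params_billions: float, bits_per_param: float) -> float:
    return params_billions * 1e9 * (bits_per_param / 8) / 1e9  # bytes -> GB

for name, params in [("2B 'smol' model", 2), ("7B model", 7), ("70B model", 70)]:
    fp16 = weights_gb(params, 16)
    q4 = weights_gb(params, 4)
    print(f"{name}: ~{fp16:.1f} GB at fp16, ~{q4:.1f} GB at 4-bit quant")

# A 2B model 4-bit-quantized (~1 GB of weights) fits in ordinary laptop RAM
# and runs on CPU -- no cluster required for inference.
```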
I'm sticking with Floorp for now. Faster development doesn't mean it will be sustainable, or necessarily that all the features will be good; we'll see. More isn't better. Brave and Opera GX have a bunch of features too.
But I’m using Firefox and Floorp as a variant because I want something that gets fundamentals right. And if Zen becomes huge, great.
Dunno then my friend. It’s not been an issue for me on either OS. But I believe you of course. Good luck figuring it out