  • I’d actually be surprised if Apple pays anything to OpenAI at the moment. Obviously running some Siri requests through ChatGPT (after the user confirms that’s what they want to do) is quite expensive for OpenAI, but Apple Intelligence doesn’t touch OpenAI servers at all (just Siri has ChatGPT integration).

    Even then, there’ll obviously still be a lot of requests, but OpenAI’s problem is that it has very little negotiating leverage. Google owns Android, so most phones default to Gemini, which instantly gives Google a huge advantage in market share. OpenAI doesn’t have a platform of its own, so Apple, with the second-largest install base of any smartphone operating system, is OpenAI’s best shot at distribution.

    Apple might benefit from OpenAI, but OpenAI needs Apple far more than the other way around. Apple Intelligence runs perfectly fine (well, as “perfectly fine” as it currently does) without OpenAI; the only functionality users would lose is the option to redirect “complex” Siri requests to ChatGPT.

    In fact, I wouldn’t be surprised if OpenAI actually pays Apple for the integration, just like Google pays Apple a hefty sum to be the default search engine for Safari.


  • Apple Intelligence isn’t “powered by OpenAI” at all. It isn’t even based on OpenAI’s models.

    The only time OpenAI’s servers are contacted is when you ask Siri something it can’t handle with Apple Intelligence, and even then Siri explicitly asks the user first whether they want to send the request to ChatGPT.

    Everything else in Apple Intelligence runs either on-device or on Apple’s “Private Cloud Compute” infrastructure, which apparently uses M2 Ultra chips. You then have to trust Apple that its privacy claims are true, but you kind of do that when choosing an iPhone in the first place. There’s actually some pretty interesting tech behind this.
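
    To make that routing concrete, here’s a minimal Python sketch of the three tiers as described above. Every name in it is hypothetical and invented purely for illustration; none of this is a real Apple API, and the size heuristics are stand-ins:

    ```python
    # Hypothetical sketch of the request routing described above.
    # None of these names are real Apple APIs; the heuristics are stand-ins.

    def fits_on_device(request: str) -> bool:
        # Pretend short, simple requests fit the local model.
        return len(request) < 50

    def fits_private_cloud(request: str) -> bool:
        # Pretend mid-complexity requests still stay on Apple's servers.
        return len(request) < 200

    def handle_siri_request(request: str, user_allows_chatgpt: bool) -> str:
        if fits_on_device(request):
            return "answered by the on-device model"
        if fits_private_cloud(request):
            return "answered by Private Cloud Compute (Apple's servers)"
        # Only this step ever touches OpenAI, and only after the user
        # explicitly confirms it for this specific request:
        if user_allows_chatgpt:
            return "forwarded to ChatGPT (OpenAI's servers)"
        return "not sent anywhere; Siri declines instead"

    print(handle_siri_request("Set a timer for 10 minutes", user_allows_chatgpt=False))
    ```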


  • CUDA is a proprietary platform that (officially) only runs on Nvidia cards, so making projects that use CUDA run on non-Nvidia hardware is not trivial.

    I don’t think the consumer-facing stuff can be called a monopoly per se, but Nvidia can easily push proprietary features onto the market (G-Sync before they adopted VESA Adaptive-Sync, DLSS, etc.) because they have such a large market share.

    Assume a scenario where Nvidia has 90% market share and its cards still only support adaptive sync through the proprietary G-Sync solution. Display manufacturers will naturally cater to the bigger market, so most displays ship with G-Sync support instead of VESA Adaptive-Sync, and 9 out of 10 customers will likely buy a G-Sync display because they own Nvidia cards. Now everyone has a monitor supporting some form of adaptive sync.

    Then AMD and Nvidia release their next GPU generation, and, viewed in isolation in this hypothetical scenario, the AMD cards are 10% cheaper at the same performance and efficiency as their Nvidia counterparts. The problem for AMD is that even though their cards are better per dollar, 9 out of 10 buyers would also need a new display to get adaptive sync working with an AMD card (their current one only supports the proprietary G-Sync), and AMD can’t possibly undercut Nvidia by enough to also cover a new display. So 9 out of 10 customers go with Nvidia again.
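
    To put toy numbers on that (purely illustrative, not real prices): even a card that’s 10% cheaper loses as soon as a new display enters the equation.

    ```python
    # Toy numbers, purely illustrative -- not real market prices.
    nvidia_gpu = 500.00
    amd_gpu = nvidia_gpu * 0.90   # AMD undercuts Nvidia by 10%
    new_monitor = 300.00          # needed because the old display is G-Sync-only

    cost_to_stay = nvidia_gpu               # keep the existing display
    cost_to_switch = amd_gpu + new_monitor  # cheaper GPU, but plus a display

    print(f"Stay with Nvidia: ${cost_to_stay:.2f}")   # $500.00
    print(f"Switch to AMD:    ${cost_to_switch:.2f}") # $750.00
    ```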

    To be fair to Nvidia, most of their proprietary features are somewhat innovative. When G-Sync first came out, VESA Adaptive-Sync wasn’t really a thing yet, and when DLSS was released it was way better than any other upscaler in existence and required hardware that only Nvidia had.

    With CUDA, though, it’s a big problem: entire software projects just won’t (officially) run on non-Nvidia hardware, so Nvidia can charge whatever it wants, unless the asking price exceeds the cost of switching to a competitor’s products and, crucially, porting over the affected software.
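
    As a small illustration of that lock-in, here’s a trivial kernel written against Numba’s CUDA API in Python. It runs only where an Nvidia GPU and the CUDA toolkit are present; moving it to other hardware means rewriting it against something like AMD’s HIP/ROCm stack:

    ```python
    # Requires an Nvidia GPU plus the CUDA toolkit -- that is the lock-in.
    import numpy as np
    from numba import cuda

    @cuda.jit
    def add_vectors(a, b, out):
        i = cuda.grid(1)  # global thread index, a CUDA-specific concept
        if i < out.size:
            out[i] = a[i] + b[i]

    a = np.arange(1024, dtype=np.float32)
    b = np.arange(1024, dtype=np.float32)
    out = np.zeros_like(a)

    threads_per_block = 128
    blocks = (a.size + threads_per_block - 1) // threads_per_block
    add_vectors[blocks, threads_per_block](a, b, out)  # CUDA launch syntax

    print(out[:4])  # [0. 2. 4. 6.]
    ```

    Tools like AMD’s hipify can translate a lot of CUDA source automatically, but validating and maintaining the port is exactly the switching cost described above.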