zaptrem a day ago

World Labs’ (Lee’s company) stated goal is to “[build] Large World Models to perceive, generate, and interact with the 3D world.”

Imo video models are the closest thing we have to “spatial intelligence.” They generate in three dimensions (2D images + time), scale just like image and probably language models, and given the right controls can model 3D worlds interactively (https://gamengen.github.io/). Not sure there’s a need to directly model polygons or point clouds (assuming that’s what they’re trying to do?) when there’s so much video data to enable massive scaling of video models. I expect we’ll soon see video models used as planners for robotics as well.
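
A minimal sketch of the "given the right controls" idea (toy code, all names hypothetical; GameNGen itself uses an action-conditioned diffusion model, this only shows the conditioning pattern): condition a next-frame predictor on a player action and roll it out autoregressively, which is what makes the generated world interactive:

  import torch
  import torch.nn as nn

  class ActionConditionedFramePredictor(nn.Module):
      """Toy model: predicts frame t+1 from frame t plus a discrete action id."""
      def __init__(self, num_actions=4, channels=3):
          super().__init__()
          self.action_embed = nn.Embedding(num_actions, 8)
          self.net = nn.Sequential(
              nn.Conv2d(channels + 8, 32, 3, padding=1), nn.ReLU(),
              nn.Conv2d(32, channels, 3, padding=1),
          )

      def forward(self, frame, action):
          # Broadcast the action embedding over the image and append it as extra channels.
          b, _, h, w = frame.shape
          a = self.action_embed(action).view(b, -1, 1, 1).expand(b, -1, h, w)
          return self.net(torch.cat([frame, a], dim=1))

  model = ActionConditionedFramePredictor()
  frame = torch.rand(1, 3, 64, 64)            # current frame
  for action in (0, 2, 1):                    # "interactive" rollout driven by player inputs
      frame = model(frame, torch.tensor([action]))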

namaria a day ago

Let's just say AGI is a thing that is possible to build.

Why would anyone think they can own, boss around and rent out an intelligent entity?

It can't be goaded with threats of starvation and exposure. I wonder why anyone would think that manufacturing an intelligent entity would result in a benevolent and super productive slave mind.

  • ZeroGravitas a day ago

    Even weirder, the current market leader was founded on exactly the opposite thesis, that the god they make would kill us all and they needed to protect us from that eventuality.

    Then they pivoted to profiting from this civilisational danger instead, which is itself a danger sign they warned against:

    https://openai.com/index/introducing-openai/

    > it’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

    > Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

  • Ukv a day ago

    > Why would anyone think they can own, boss around and rent out an intelligent entity?

    Do we not already do this with existing biological intelligence, like horses?

    > It can't be goaded with threats of starvation and exposure

    I think the reason we do this for biological intelligence is that evolution has made the entity act in accordance with self-preservation, which isn't in itself a useful goal to us, so we set up conditions such that completing our goal is a proxy for it (the way to get food is to do useful work).

    For artificial intelligence we can instead directly set the loss function that the model is created to act in accordance with. We can also similarly set up proxy goals - like how with an LLM its immediate design is to generate sensible next tokens, but we can set up context such that the way to complete that task is to fulfill the user's request to generate a poem.
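
    A minimal sketch of what "directly set the loss function" amounts to for an LLM (PyTorch-style, with a hypothetical toy model standing in for a real transformer): the trained-in objective is just next-token cross-entropy, and the surrounding context is what turns "predict sensible tokens" into "write the poem the user asked for":

      import torch
      import torch.nn.functional as F

      # Hypothetical toy embedding + output head standing in for a real LLM.
      vocab_size, d_model = 1000, 64
      embed = torch.nn.Embedding(vocab_size, d_model)
      head = torch.nn.Linear(d_model, vocab_size)

      # token_ids: a tokenized batch, e.g. "User: write a poem about rain\nAssistant: ..."
      token_ids = torch.randint(0, vocab_size, (2, 16))

      # The loss we "set": predict token t+1 from the tokens up to t.
      hidden = embed(token_ids)   # (batch, seq, d_model); a real model adds attention layers here
      logits = head(hidden)       # (batch, seq, vocab)
      loss = F.cross_entropy(
          logits[:, :-1].reshape(-1, vocab_size),  # predictions for positions 0..n-2
          token_ids[:, 1:].reshape(-1),            # targets: the same sequence shifted by one
      )
      loss.backward()  # gradients push the model toward whatever continuations the data contains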

    • lm28469 a day ago

      > Do we not already do this with existing biological intelligence, like horses?

      And the vast majority of humans

  • lm28469 a day ago

    > It can't be goaded with threats of starvation and exposure.

    That already seems to imply that AGI would somehow be closer to a God than a program. Otherwise it's easy: you just switch it off

    • namaria a day ago

      I imply nothing. We will have to wait for some actual AGI to see if it would be made to do work under threat of being switched off.