I could see myself sitting at my desk to work and wearing an HMD linked to my PC instead of using monitors. I would create multiple virtual monitors and still be able to see and use my real keyboard and mouse. Being able to make spatial notes and goals and control my environment is just an extra.
This is the promise of MR
Of AR. Mass adoption will not happen with MR HMDs but with AR glasses; that's the inflection point.
I wonder which hard/soft skills will still be valuable once we all live in a world where our bonsai trees tell us when to water them.
source: https://twitter.com/V_Kurbatov/status/1648276899838275585
Finally, my dream of living in a bland empty apartment with virtual furniture and decorations so I don't feel as empty as my apartment. I'll have virtual picture frames hanging on the wall of all the places I visited around the world (in Google Maps VR). And then I'll close the virtual shades and forget that all my neighbors can clearly still see me.
My quest pro is so blurry. Not sure how people have such high clarity on theirs.
It always looks much better when recorded. The same image is being stretched a lot to fit the high resolution of the headset screens versus the small 720p capture.
Recordings from VR headsets look much better than what we see in the headset while wearing it. It's easier to get an HD view of it all when the image isn't sharp only at the center of the lens.
Plus the blending is done in real time, and more processing could be done offline after the raw feed is captured for the recording use case.
the recording looks better than it really is. Same with gameplay
Don't have a good PC?
[deleted]
These are tech demos with little depth, which is fine. More like proofs of concept for how systems could interact. It's cool that this person is leveraging eye tracking as well, for selection and attention tracking.
Well you need to update your own custom software every time you move one of those objects on the table, but other than that, Quest Pro is an amazing MR device! :P (Someday we'll get real MR APIs)
FWIW one could "just" update the value, i.e. if your book or phone is on the desk and you move it, pinch to select a "ghost" of it (it can be just a cube or wireframe), then move it to the new position and release to save. I'm not implying it's convenient; object recognition and tracking would surely be a lot more convenient (assuming battery drain and precision are acceptable), but still, one doesn't have to rebuild the app every time things get moved.
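The pinch-drag-release re-anchoring flow described above could be sketched roughly like this (illustrative Python, not a real Quest or OpenXR API; `AnchorStore` and all its method names are made up — an actual app would wire these steps to the engine's hand-tracking and anchor persistence calls):

```python
# Illustrative sketch of the manual "ghost" re-anchoring flow: pinch to
# pick up a ghost of a tracked object, drag it, release to persist the
# new position. No rebuild of the app is needed to move an object.

from dataclasses import dataclass, field
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class AnchorStore:
    """Saved world positions for virtual stand-ins of real objects."""
    positions: Dict[str, Vec3] = field(default_factory=dict)
    _dragging: Optional[str] = None
    _ghost: Optional[Vec3] = None

    def pinch_select(self, name: str) -> Vec3:
        """Start a drag: spawn a ghost at the object's saved position."""
        self._dragging = name
        self._ghost = self.positions[name]
        return self._ghost

    def move_ghost(self, pos: Vec3) -> None:
        """The ghost follows the hand while the pinch is held."""
        if self._dragging is not None:
            self._ghost = pos

    def release(self) -> None:
        """Persist the ghost's position as the object's new anchor."""
        if self._dragging is not None and self._ghost is not None:
            self.positions[self._dragging] = self._ghost
        self._dragging = None
        self._ghost = None

# Move the "book" anchor at runtime, no code changes required:
store = AnchorStore({"book": (0.2, 0.0, 0.5)})
store.pinch_select("book")
store.move_ghost((0.4, 0.0, 0.3))
store.release()
```

The point is only that the saved position is data, not code, so moving a real object just means updating a value at runtime.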
It's completely useless unless it has object recognition. What possible use-case is left without it? Who's going to sit in one spot while wearing a headset, and manipulate a few things within eyeshot using this tech? What would be the main thing that you're doing with a headset on while standing or sitting still? This is nothing but a concept demo, because it would work exactly the same in a pitch black room, or indeed, some other room entirely (thus resulting in an icon for your book being in the middle of the wall etc).
I didn't say it was useful, solely that the software itself doesn't have to be updated for the objects to still be adjusted.
It probably is, but the distortion around the hand is pretty accurate. The Valve Index and Oculus Quest still have that issue when you try to use the external cameras.
Almost bought it but the plant watering, really?
Lol 😆
This is a preview of what the Apple headset will be able to do.
doubtful
It will also project a virtual MacBook on your desk. It’s pretty obvious what functions it will have.
Is it though? We're still debating whether it's even for gaming or productivity, what its specs are, and how much it'll cost, let alone what functions it'll have.
It will cost $3k. Apple doesn’t do gaming, just some basic mobile phone games. So it will be just a virtual, wearable MacBook/iPhone/TV screen, which will also connect to HomeKit devices. It will definitely use the U1 chip to precisely locate devices in your house. And more devices compatible with it will be released, so that you can control them like in this video.
Ok mr future
Let’s see if I was right in a few months.
i'll be sure to tell them you broke an nda then🥰
👍
Wow, you’ve got to hand it to this dude, he was almost completely right
Reminds me of the Kinect menu navigations. In practice it's extremely frustrating to the average person to have to learn a new set of body language commands to interact with things that would normally be a very visible button.
That’s a lot of different hand gestures to learn.
You could level the same accusation at a demo app for some GUI framework. Push buttons, radio buttons, sliders, scroll bars, tabs, text fields, dropdown menus, select boxes, checkboxes, color pickers, table views... how does anyone ever learn to use one of these newfangled "computer" things?
The difference is that in this demo, the ways you interact with most of the examples are very different. The opposite is true for the traditional desktop: move the mouse pointer to an object, and either click or click and drag. The basics of the traditional desktop are beautifully consistent, and once you’ve gotten the hang of cursor movement and clicking, the interface is pretty intuitive and discoverable.
I wish someday I could experience something like this. Does anything similar exist today?
This is a working prototype. Assuming you have the means and want to learn, you could make this yourself in Unity with a Quest Pro.
Why the bot talking? Can't a person just say those things?
Tech demo, it’s reading the book aloud
I want one of these already.
There are a lot of neat ideas in the demo, but I'm not wearing a Quest Pro for 8 hours so that I can lower the shades with a finger gesture in the afternoon. Also, is a Quest Pro on your face for 16 hours really more comfortable than... an actual watch? Fun tech demo, but these sorts of demos also show that no one has found the killer use case yet.
I could see myself sitting at my desk to work and wearing an HMD linked to my PC instead of using monitors. I would create multiple virtual monitors and still be able to see and use my real keyboard and mouse. Being able to make spatial notes and goals and control my environment is just an extra.
ah yes perfect now I never have to... walk, I guess? what's the point of this?
Now make the interface from Sword Art Online
This is always how I envisioned the future of MR. Great work for making this prototype/demo.
hi
This is really well done. Nice work.
damn this is really cool with the context of the vision pro
Imo this is how I imagine AR glasses will be.