
View Full Version : "Unlimited Games Graphics Ability in 16 months"



AzureBlaze
Apr 29, 2010, 09:30 PM
Step 1: watch video


http://www.youtube.com/watch?v=Q-ATtrImCx4&feature=player_embedded

So this guy says his software will bring unlimited graphics power to PC gaming in 16 months (the time they think it will take to release it). That would be really great, because it would end polygons and the video card wars (which are expensive), but what's your take on the likelihood of this?

Yes, he can build pyramids out of camels or ponies or whatever those are.
But then I didn't see anything getting animated.

The lighting seems kinda poor?
I don't know how lighting works at all, so maybe it can be fixed (like with the Hedgehog Engine or something) and he's just not as good at it as the HE is.

Everything is dupe-ish.
Look at all the things that are the same thing over and over: a clump of grass, a branch, a rock...thing, those fancy camels, etc. I'd bet it's an effect of "Yes, we can place infinite point-cloud points, but ho hum to creating like five different stands/clumps/whatevers of grass; we want to make the video fast, so we'll just stamp the same one over and over." It opens up (and necessitates) designing lots of different versions of the same thing so it looks more realistic.

But then...
It does have a point about how it works without becoming some infinite loop, because really "all that will fit on your monitor" is all that it makes at one time. That actually makes a lot of sense. So does the Google analogy, because one could consider the web to be infinite points, yet Google doesn't take infinite time to fill up your monitor/the search results with items.
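
Just to illustrate that "only one point per pixel ever matters" idea, here's a quick toy sketch in Python. This is my own brute-force illustration, not their actual method (the whole pitch is presumably that they *search* for that one point instead of testing every point like this loop does), and the screen size and point format are made up:

# Toy illustration of the "only one point per screen pixel matters" idea.
# NOT Euclideon's algorithm - just a brute-force depth test over a point
# cloud: however many points you feed in, at most width*height of them
# survive to the final image.

import random

WIDTH, HEIGHT = 64, 48  # tiny "monitor"

def render(points):
    """points: list of (x, y, z, color) with x, y already in pixel space.
    Returns a dict keyed by pixel, holding the nearest point's color."""
    framebuffer = {}          # (px, py) -> color
    depth = {}                # (px, py) -> nearest z seen so far
    for x, y, z, color in points:
        px, py = int(x), int(y)
        if not (0 <= px < WIDTH and 0 <= py < HEIGHT):
            continue          # off screen, contributes nothing
        if z < depth.get((px, py), float("inf")):
            depth[(px, py)] = z
            framebuffer[(px, py)] = color
    return framebuffer

# "Unlimited" input: a million random points...
cloud = [(random.uniform(0, WIDTH), random.uniform(0, HEIGHT),
          random.random(), random.randrange(2**24)) for _ in range(1_000_000)]
fb = render(cloud)
# ...but the output can never exceed one point per pixel.
print(len(fb), "<=", WIDTH * HEIGHT)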

If this thing works they need to make a PSU with it because then the clothing will be even more awesome. What do you think?

Split
Apr 29, 2010, 11:30 PM
Ingenious... if this is truly in our not-too-distant future, it's about to put a ton of people out of work, but it will be an enormous breakthrough that will save us money on upgrading computer hardware, and all the programmers will be able to divert their brainpower towards terraforming Mars.

Rubius-sama
Apr 30, 2010, 02:53 AM
I think he's really saying unlimited detail on 3D objects, not unlimited rendering power for the PC, even though that's what it sounds like he's trying to say. Then again, he did bring up that "search" example, so I'm not exactly sure... If he really is trying to say this will allow unlimited processing power, then it sounds pretty impressive.

Palle
Apr 30, 2010, 03:10 AM
It's an interesting proposition, though I can't comment on how feasible the whole affair is... I'll leave that to more capable folks like you.

Still, if it's legit... I'm surprised ATi and nVidia haven't already collaborated to turn these guys into fish food.

Gibdozer
Apr 30, 2010, 09:38 AM
Sounds great, but so did NURBS 15 years ago! It's still a bit early to get excited about this tech.

Nitro Vordex
Apr 30, 2010, 05:49 PM
It's not unlimited; it's just that the potential for much better rendering skyrockets. In reality, this takes way more processing power than traditional polygons do.

Split
Apr 30, 2010, 06:06 PM
I think he's really saying unlimited detail on 3D objects, not unlimited rendering power for the PC, even though that's what it sounds like he's trying to say. Then again, he did bring up that "search" example, so I'm not exactly sure... If he really is trying to say this will allow unlimited processing power, then it sounds pretty impressive.

Well, it is unlimited rendering power; it searches for points with tech that's similar in principle to Google, which searches for information, and it can call up infinite points with ease the way Google can call up infinite amounts of information at will.

Zyrusticae
Apr 30, 2010, 06:12 PM
Until they actually show some animation in there (because obviously all games use quite a bit of it), I will remain skeptical.

It's neat stuff, but all they've shown so far are static models. And that's bad.

Nitro Vordex
Apr 30, 2010, 06:31 PM
They probably can't do much with the animation quite yet, plus it's still experimental. Though, if you consider that matter is made up of circles (or spheres), you could imagine it would be a lot smoother looking than polygons.

Only problem is, how would this work for consoles, if it ever does? The file size for a regular game would be humongous.

Powder Keg
Apr 30, 2010, 07:27 PM
Tighten up the graphics

http://www.blogcdn.com/www.joystiq.com/media/2006/01/game_designers.jpg

KodiaX987
Apr 30, 2010, 07:27 PM
Probably, but with hardware getting higher capacities for the buck, I wouldn't be surprised to see the good old gaming cartridge make a comeback... in the form of a high-capacity read-only Flash drive of sorts.

Strange; I didn't see much technical explanation - or rather, I saw 30 seconds of it, and the remainder of the 8-minute video consisted of a massive revolutionist "we'll tell you what the big names don't want you to know" sales pitch that makes me want to equate these guys to Steorn (http://en.wikipedia.org/wiki/Steorn).

So basically it seems those guys have come up with a way to shave down the amount of rendering to its barest minimum. Maybe it's true, and congrats if they did. The manner they describe for doing it sounds interesting, to say the least. I'd like to see the SDK and the code in action before doing anything, however. I'll go so far as to say there is no proof of concept until someone builds a basic FPS using that rendering method - some interactive software in which you can move around, see animated objects, and use a generic plasma gun in your hands to bounce a few energy balls around and thus have some particle effects going. Theoretically, there will be a point at which the CPU gets bottlenecked from having to figure out so much stuff happening at once.

Chances are the reality of this future will be one in which we'll see some super-hyped physics-based game (much like what Crysis did with graphics) and a bunch of clones in the wake, until at least the next big graphics breakthrough.

Though looking at how things have evolved so far, my opinion on the matter is "people will see that things look nicer, then promptly forget about it."

Ezodagrom
Apr 30, 2010, 10:18 PM
That project is kinda interesting, but it's funny how he keeps comparing that "unlimited detail" thing with outdated games and doesn't even mention tessellation.

http://www.youtube.com/watch?v=-uavLefzDuQ

http://www.youtube.com/watch?v=bkKtY2G3FbU

Also, there can't be such a thing as "unlimited". There will always be a limit (hard drive space limit, CPU/graphics card limit, and so on).

Myphys
May 1, 2010, 04:28 AM
From a technical standpoint, it is impressive.
If I understood correctly, this is done (solely?) at the software level, rather than on the graphics card.

In some ways, it can't be compared to any "current" games at all.
Current games rely heavily on the graphics card, whereas this produces a fairly high level of graphics in real time solely on the CPU. That's a big difference.
As for animated objects, who's to say developers can't mix current technology with this?

I suspect this isn't new technology. It probably uses some sort of advanced math function/algorithm rather than storing each individual point of an object.

As for the file size limitation, one thing that came to mind was procedural generation.
Look up .kkrieger on Wikipedia or YouTube.
Note the file size of this game/demo: 96 KB. Not gigabytes or hundreds of megs.

Actually, this software is most likely doing something similar to procedural generation, where a function does some sort of "look-up" to find out what should be on the screen.
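
Something like this, for instance - a made-up toy in Python, not how .kkrieger or this demo actually works; the seed, the formula, and the octave count are all just illustrative:

# Rough sketch of the procedural-generation idea: instead of storing every
# point, store a formula and evaluate it on demand for whatever spot the
# camera asks about. The "data" on disk is a seed and a few constants -
# bytes, not gigabytes - yet you can query the terrain at any resolution.

import math

SEED = 1234  # the entire "asset" on disk

def terrain_height(x, y, seed=SEED):
    """Deterministic height at world position (x, y); finer detail just
    means calling this with more closely spaced coordinates."""
    h = 0.0
    for octave in range(1, 6):                      # a few layered waves
        freq = octave * 0.13
        h += math.sin(x * freq + seed) * math.cos(y * freq + seed) / octave
    return h

# Sample it as coarsely or as finely as the screen needs:
coarse = [terrain_height(x, 0.0) for x in range(0, 100, 10)]
fine   = [terrain_height(x / 100.0, 0.0) for x in range(0, 10000, 10)]
print(len(coarse), "samples or", len(fine), "samples from the same few bytes")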

The biggest disadvantage is taking up (or taking back?) CPU time, just like in the good ol' days.

HAYABUSA-FMW-
May 1, 2010, 05:42 AM
If this thing works they need to make a PSU with it because then the clothing will be even more awesome. What do you think?
Nah, you'll just see twice the number of one-color pastel mannequin Gumby-time people walking around, unloaded. --- SEGA! Since it's, what, for static, not skinned/rendered, lowest-res objects. Rather, make a new PSO with it: Endless Nightmare missions without the respawn loading. Bring it.

Could be fun for Dead Rising 3: zombies displayed onscreen as far as you can see. Of course, most would be far, far away with no interaction and doing nothing at all - not even shuffling around in place.

Dead Rising 1 zombies already lift a hand or two, walk a bit, and can notice you / kick in some AI when you're training a sniper scope on them, even from a football field's length away.

Blitzkommando
May 1, 2010, 01:26 PM
Skeptical doesn't even begin to describe how I feel about this. Graphics cards are becoming more diverse in their processing capability, not to mention they are far and away more powerful than any CPU, albeit still in-order, unlike the CPU. And while Nvidia has nixed double precision in the GTX 400s (it's now a Tesla/Quadro-only thing), the latest ATI cards can provide over 1 TFLOP of processing capability (the 5970; somewhere under 600 GFLOPS for the 5870). Even the king CPU out there, the i7 980X, is limited to around 1/6th that and costs more than double. Point is, it doesn't sound like this rendering couldn't be done on a graphics card. And even if it needed double-precision calculations (which I doubt), modern graphics cards would just tear through it far more efficiently than even the latest hexacore chips.

Realistically, with so much focus going toward ray tracing, I feel that's going to be the next step. It's far more realistic in lighting than anything we have now. It's still a bit of a ways off yet, but with graphics cards doubling in performance every year to 18 months, it's certainly closing in fast. The question then becomes whether CPUs will be able to catch up. Or, probably a better question, whether developers will pick up on multi-threading sufficiently to keep the graphics cards fed with enough data for smooth gameplay.

Niered
May 2, 2010, 05:57 PM
I spoke with some friends about this; apparently it's mostly smoke up your ass.

It can render static 3D environments beautifully, yes, but unless you just want to walk around a pretty landscape, you're not going to have any fun. Apparently it's impossible for this tech to deal with anything that isn't static, so you won't have any enemies running around or anything moving at all.

KodiaX987
May 2, 2010, 06:43 PM
It can render static 3D environments beautifully, yes, but unless you just want to walk around a pretty landscape, you're not going to have any fun. Apparently it's impossible for this tech to deal with anything that isn't static, so you won't have any enemies running around or anything moving at all.

Hey guys, we found out which engine Myst VI is gonna use! :wacko:

Niered
May 3, 2010, 12:11 AM
Hey guys, we found out which engine Myst VI is gonna use! :wacko:

Basically, I mean, it makes sense if you think about it. The program is just figuring out what color each individual pixel should be, because it's comparatively easy for it to know that when every point is fixed. The instant that anything other than the viewpoint changes its position or pose, the game has no clue what the fuck it's doing.

It's all smoke and mirrors; the only thing that could benefit from this is maybe a visual novel or a fine artist.

Randomness
May 3, 2010, 12:42 AM
As far as rendering goes, it's my impression that graphics cards already only calculate one point per pixel of resolution anyway, and then start over with the next frame. Nothing revolutionary there.


Also, this seems like it would take a hell of a lot of work to make things... I strongly suspect all the models they used were made traditionally and then converted somehow.

Animating would probably still be done along bones, except instead of binding faces of a model... you're going to have to save a link data set for every single point, plus its position relative to the bone's center.

So... for every point, the amount of data would be:
Color (32 bits here, including the alpha channel - which would make this process hell if you made a window out of this stuff)
Bone link (Probably small)
Coordinates (Also small)

Data use would be similar on color, since textures are images, and images get stored pixel by pixel. But the linking and coordinates... would cause a massive memory overhead for whatever exists at a given time. I'm trying to figure out how you would cut down on that. Obviously you don't put points inside of stuff, only on the surface (hey, much like models are today!).
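
Rough numbers, just to put that overhead in perspective - all the per-point sizes and point counts in this little Python sketch are my own guesses, not anything from the demo:

# Back-of-envelope memory estimate for the per-point layout described above.
# Every size and count here is an assumption for illustration only.

BYTES_COLOR = 4      # 32-bit RGBA
BYTES_BONE  = 2      # bone index, "probably small"
BYTES_COORD = 12     # 3 x 32-bit floats relative to the bone
BYTES_PER_POINT = BYTES_COLOR + BYTES_BONE + BYTES_COORD

def mem_megabytes(points):
    return points * BYTES_PER_POINT / (1024 ** 2)

# Say a single animated character is covered at one point per square
# millimetre over roughly 2 m^2 of surface: about 2 million points.
character_points = 2_000_000
print(f"one character: ~{mem_megabytes(character_points):.0f} MB")

# A scene with 50 such characters plus a 100-million-point environment:
scene_points = 50 * character_points + 100_000_000
print(f"whole scene:   ~{mem_megabytes(scene_points) / 1024:.1f} GB")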

In truth, this is really just polygons that are very, very tiny. If you took a model and shrank the triangles toward zero size, the count would approach infinity and you'd get something essentially the same as this. Making sure you don't see through gaps between points would be nasty, since you want to use as few points as possible for memory purposes.

Overall, I'm skeptical of this... The memory overhead would grow like crazy trying to hold all of this stuff at once... so you'd have to recreate everything as you did a pass... so you'd still, at minimum, hold every structure in memory at least once, which is still crazy data use. If you don't store the exact dot structure of each object currently in use in RAM, you have to load it from disk every frame, which is obviously not sustainable.

Truthfully, I think something's up here. Also, nothing in those graphics struck me as particularly impressive. All they've really done is take the textures, split them into pixels, and then save those pixels as dots where they would be on the model.

Actually, I don't think developers would like this. It would make skinning hell. You couldn't just use Photoshop; you'd need a whole new program. Unless you're converting existing stuff, in which case I see no advantage, since you can't make anything more detailed than your modeling program's polygon count.

tl;dr: Go back and read, I'm not helping you.

Edit: Read the rest of the thread. Yeah, only doing static environments explains nicely why this isn't blowing up from saving relative positional data and such for animations. Look ma! I reinvented pointillism!

Niered
May 3, 2010, 11:40 AM
Yeah, only doing static environments explains nicely why this isn't blowing up from saving relative positional data and such for animations. Look ma! I reinvented pointillism!

Pretty much.