Real or Fake? (in Off-topic)


AdminTitan [The Sky Forge] August 3 2011 3:13 PM EDT

Sickone August 3 2011 3:22 PM EDT

Real, but grossly misrepresented.

"Pixel cloud" technology is nothing new - we used to call it "voxels", and games used it since before 1998 (like, say, the terrain in "Delta Force 1").
Voxel rendering has insane detail advantages when it comes to still image fidelity, however it really suffers as soon as you try to make anything animated with it.

IF they even manage to make anything useful out of it as far as animation goes, that would be great, but until then, I would say "wait and see".

AdminTitan [The Sky Forge] August 3 2011 3:29 PM EDT

Yeah, I'm familiar with it. The only thing that's truly amazing is that they claim the video they show you is rendered in real time, which is the part I'm really questioning.

Sickone August 3 2011 3:31 PM EDT

http://www.kotaku.com.au/2011/08/infinite-detail-and-euclideon-the-elephants-in-the-room/

Sickone August 3 2011 3:41 PM EDT

Ah, that video is almost certainly a real-time rendering, but it doesn't say on what hardware, and it's certainly not fully optimized yet :)

As they say, they take a few shortcuts with a "smart" way of quickly discarding large chunks of irrelevant data, so rendering can be quite fast.
Look at http://voxels.blogspot.com and grab the demo yourself to see what voxel tech can already do, on your own machine, right now - and it's not even really new stuff at all, like I said before.
And that's not very optimized voxel tech.

Also, it's not really "UNLIMITED" detail.
You still only go down in detail to the "voxel" size, and when you have a very small voxel size, you have a LOT of data, unless you can put it in a so-called "sparse voxel octree" (which basically means you do a lot of copy-pastes for an object - but then again, that's what current games with polygons already do anyway).
However, the more actual non-repetitive detail you get, the more punishing the memory requirements become... and with animation it gets multiplied by a HUGE factor.

I'm not saying the tech can't work, I'm just saying it does need a LOT of work for it to become feasible.
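To make the "sparse voxel octree" idea concrete, here's a minimal sketch in Python (purely illustrative - not Euclideon's actual data structure, and the names are made up): identical subtrees are shared by reference, which is exactly the copy-paste trick that keeps memory bounded when the scene repeats itself.

```python
# Minimal sparse-voxel-octree sketch (illustrative only, not Euclideon's format).
# Identical subtrees are deduplicated and shared by reference - the "copy-paste"
# that keeps memory bounded when the scene repeats itself.

class Node:
    __slots__ = ("children", "color")
    def __init__(self, children=None, color=None):
        self.children = children  # tuple of 8 child nodes (or None entries), or None for a leaf
        self.color = color        # leaf payload

_pool = {}  # hash-consing pool: identical subtrees map to a single shared object

def make_node(children=None, color=None):
    key = (tuple(id(c) for c in children) if children else None, color)
    if key not in _pool:
        _pool[key] = Node(children, color)
    return _pool[key]

def build(depth, sampler, x=0.0, y=0.0, z=0.0, size=1.0):
    """Recursively build an octree; empty regions return None (the 'sparse' part)."""
    if depth == 0:
        color = sampler(x, y, z)
        return make_node(color=color) if color else None
    half = size / 2
    kids = tuple(
        build(depth - 1, sampler, x + dx * half, y + dy * half, z + dz * half, half)
        for dx in (0, 1) for dy in (0, 1) for dz in (0, 1)
    )
    return None if all(k is None for k in kids) else make_node(children=kids)

# A repetitive "scene": a flat slab of grey voxels near the ground plane.
# Because the filled regions all look alike, the tree collapses to a handful
# of shared nodes instead of 8**6 individual voxels.
tree = build(depth=6, sampler=lambda x, y, z: "grey" if z < 0.1 else None)
print("unique nodes stored:", len(_pool))
```

For a scene that's mostly copies of the same rocks and leaves, that pool stays tiny; for genuinely unique (or animated) detail it grows with every voxel, which is the memory problem described above.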

Quyen August 3 2011 3:58 PM EDT

well, detail only matters like 1/10th to me, 'cause I don't look at rocks when I have to run away from some monster or so :|

Lord Bob August 3 2011 4:05 PM EDT

well, detail only matters like 1/10th to me, 'cause I don't look at rocks when I have to run away from some monster or so
It helps to create a more immersive experience.

And spell check doesn't like "immersive." Yeah, it's a word.

Quyen August 3 2011 4:06 PM EDT

what is immersive?

Shadow xEclipse August 3 2011 4:07 PM EDT

I know, right!?

*Marcus Fenix running from a giant landslide of city buildings... stops to say "Yo Dom, look at how real this decapitated head looks! It's just... awesome!"
*Landslide crushes Marcus*

Haha, graphics are always good.. But not everything. Exactly why I play TM and NES games. ;P

Lord Bob August 3 2011 4:20 PM EDT

https://secure.wikimedia.org/wikipedia/en/wiki/Immersion_%28virtual_reality%29

AdminTitan [The Sky Forge] August 3 2011 4:21 PM EDT

I think they're running this on a 360, Sickone... ah, jk jk.

Yeah, I know they've got a long way to go, but the software developer and math nerd in me still finds this very very cool.

Unappreciated Misnomer August 3 2011 7:41 PM EDT

This is very cool; I would love to see this level of detail in games, though that means a lot of games will take more time and thus cost more. At least that's what the developers will tell us.

The only logical next step for this (in my mind) would be that the static objects wouldn't be static, but every object would be interactive, from the soil beneath your feet to the debris flying through the air from a rocket blast on one of those elephant statues. If only there were a portal gun to watch the physics of siphoning soil from the ground and watching it fall in a pile far away. Serves no purpose but to eat memory.

At first glance I was thinking this was a fancy Minecraft, because the terrain was so square.

AdminTitan [The Sky Forge] August 3 2011 9:26 PM EDT

Yeah, more interactive environments will be cool, although I'm hoping BF3 makes some nice headway in that direction.

Sickone August 4 2011 11:52 AM EDT

Look at this game engine made by a few Slovakians:
http://www.atomontage.com/

AdminShade August 4 2011 12:28 PM EDT

Nice new technology but I see a lack of shades? :(

ScrObot August 4 2011 1:51 PM EDT

Notch of Minecraft fame rips into it: http://notch.tumblr.com/post/8386977075/its-a-scam
and
http://notch.tumblr.com/post/8423008802/but-notch-its-not-a-scam

AdminQBVerifex August 4 2011 2:02 PM EDT

It's a very interesting concept; the hardware and software needed to make this into a fully functional game are almost there. I think the only problem with the video is that much of what you see in it is procedurally generated and repeated landscape, which means that "unlimited detail" is still limited by how much actual hard drive space you have for storing all that unique content.

You obviously can't have millions of pebbles, each with its own separate texture, without a gigantic hard drive to store all of that information. In addition, the limit on how much crap can be animated and moving at the same time hasn't been demonstrated by this video yet. There aren't any animations really testing that, so I would wait till we see some. It's very possible this is the future, but until you see creatures and other crap flying around in this environment, this is still a first step.
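To put rough numbers on that storage argument, here's a toy estimate (my own made-up figures, nothing from the video): storing a million genuinely unique pebbles costs hundreds of gigabytes, while one instanced pebble model placed from a deterministic seed costs almost nothing per copy.

```python
# Toy storage estimate (made-up figures, purely illustrative): a million unique
# pebble scans vs. one shared pebble model instanced from a deterministic seed.

import random
from itertools import islice

VOXELS_PER_PEBBLE = 64_000    # assumed detail per pebble
BYTES_PER_VOXEL = 4           # assumed color + flags per voxel
BYTES_PER_INSTANCE = 32       # position + rotation + scale per placed copy
N_PEBBLES = 1_000_000

unique_cost = N_PEBBLES * VOXELS_PER_PEBBLE * BYTES_PER_VOXEL
instanced_cost = VOXELS_PER_PEBBLE * BYTES_PER_VOXEL + N_PEBBLES * BYTES_PER_INSTANCE

print(f"all-unique pebbles: {unique_cost / 2**30:7.1f} GiB")
print(f"instanced pebbles:  {instanced_cost / 2**30:7.2f} GiB")

# With a deterministic seed you don't even need to store the placements:
# they can be regenerated on the fly, identically, every run.
def pebble_positions(seed, count, area=1000.0):
    rng = random.Random(seed)
    return ((rng.uniform(0, area), rng.uniform(0, area)) for _ in range(count))

print("first three pebbles:", list(islice(pebble_positions(seed=42, count=N_PEBBLES), 3)))
```

That gap is exactly why "unlimited detail" demos lean so heavily on repeated geometry.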

Lochnivar August 4 2011 2:38 PM EDT

Gotta say, I had very different expectations based on the thread title.

Not gonna lie, kinda disappointed.

In fact, I am going to have to (quite ironically) label this thread a 'bust'.

Admindudemus [jabberwocky] August 4 2011 2:59 PM EDT

loch, don't be a boob!

ScrObot August 4 2011 3:09 PM EDT

Thanks for keeping me abreast of the situation, guys.

A Lesser AR of 15 [Red Permanent Assurance] August 4 2011 3:28 PM EDT

A very stimulating discussion of polygons.

Sickone August 4 2011 6:26 PM EDT

Eh, in 5-6 years we might actually see it running in some mainstream games.

By then, we'll also have computers more than an order of magnitude faster than anything today, with 32 GB of system RAM and 8 GB of video RAM being the day-to-day norm. "Hardcore gamers" will be sporting Intel CPUs with 16+ cores at 5+ GHz on a 10nm fab, 128 GB of quad-channel DDRAM5 system memory at 4+ GHz, and four-way SLI NVIDIA GeForce GTX 980s with 24 GB of video DDRAM6 (or even DDR7) at maybe 8+ GHz and over 4k CUDA cores at over 1 GHz each. Games with detail and life-likeness similar to the animations in "Avatar" will run in FullHD at 60+ FPS in real time on those machines - and maybe more.

You think I'm kidding or exaggerating?
Just wait and see.

The "core creep" is inevitable since you can only go so far with per-core frequency, Intel has presented the processor road-map detailing progress up to 2017, next year the first 3D transistor processors are rolling out (which should make higher core frequencies possible without insane cooling systems), NVIDIA also has a similar road-map out for the next few years detailing similar advancements, memory is getting cheaper and faster like crazy.
Barring another massive economic meltdown (which is sadly almost unavoidable somewhere in the next 10 years, just hope it will be later rather than sooner), we just might see that happening by then.

Sickone August 4 2011 6:29 PM EDT

THIS is already possible with 3x GTX 580 in real time right now:



It's an Unreal Engine 3 tech demo.

Sickone August 4 2011 6:33 PM EDT

P.S. Mild PG warning: it has some scenes of graphic violence about midway through (blood splattering from gunshots and similar).

Duke August 5 2011 8:03 AM EDT

Eh, in 5-6 years we might actually see it running in some mainstream games.

By then, we'll also have computers more than an order of magnitude faster than anything today [...]

REPLY: An order of magnitude is a bit too much; the industry as a whole is much more concerned with reducing power consumption than with increasing performance. Intel's 22nm process coming in Q1 2012 reflects that. The percentage of transistors being dedicated to the IGPU is also growing.


AdminTitan [The Sky Forge] August 5 2011 10:39 AM EDT

2*2*2 = 8

So, according to Moore's law, an order of magnitude isn't too far off.
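Spelling that out as arithmetic (nothing new here, just the back-of-the-envelope math):

```python
# Back-of-the-envelope: one doubling every 2 years (Moore's-law cadence), over 6 years.
years, doubling_period = 6, 2
print(2 ** (years / doubling_period))   # 8.0 -- within shouting distance of 10x
```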

Untouchable August 5 2011 4:27 PM EDT

hope it is

Duke August 5 2011 4:54 PM EDT

Moore's law says that the number of transistors doubles every 2 years. It never mentions doubling the performance.

A Lesser AR of 15 [Red Permanent Assurance] August 5 2011 5:53 PM EDT




Transistors for the win.

Sickone August 6 2011 2:01 AM EDT

Moore's law says that the number of transistors doubles every 2 years. It never mentions doubling the performance

Moore's Law is not really a law, more of a hardware developer guideline and self-fulfilling prophecy.
And we HAVE been going through almost two orders of magnitude as far as processing power goes in the past ten years or so: the Passmark rating of an Intel Core i7-2600K @ 3.40GHz is 9817 (10945 for an Intel Core i7-995X @ 3.60GHz), while the late-2000 / early-2001 Pentium 4 processors like the Pentium 4 1.5 have a Passmark rating of around 175 - roughly 62 times smaller. That's very roughly 6 doublings in 10 years, or, if you prefer, a doubling every 20 months or so.

Six years and one order of magnitude: if they keep going the same way they've been going since the '50s, we're getting there.
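Checking that with the Passmark numbers quoted above (scores as cited in this thread, so treat them as ballpark figures):

```python
# Doubling-rate estimate from the Passmark scores cited above
# (ballpark figures quoted in this thread, not fresh measurements).
from math import log2

p4_score = 175        # Pentium 4 1.5, circa late 2000 / early 2001
i7_score = 10945      # Core i7-995X @ 3.60GHz, 2011
years = 10

doublings = log2(i7_score / p4_score)
print(f"ratio:               {i7_score / p4_score:.1f}x")    # ~62x, as quoted
print(f"doublings:           {doublings:.1f}")                # ~6.0
print(f"months per doubling: {12 * years / doublings:.0f}")   # ~20
```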

Sickone August 6 2011 2:12 AM EDT

http://en.wikipedia.org/wiki/Intel_Tick-Tock

Six years from now we're scheduled for whatever architecture comes AFTER "Skymont", on a 10nm fab, with so-called "3D transistors" (well, they're "sort of" 3D as opposed to nearly "flat", but, eh), which actually enter production next year with the first 22nm parts and carry on into the 2013 "Haswell" architecture, which should be available in 8-core configurations. Considering the 2015 "Skylake" architecture on a 14nm fab will almost certainly start at 8 cores and go up from there (probably up to 12 cores, or maybe even some with 16), it's not that unfathomable for the 2017 processors to have 16 cores or even more.

http://www.youtube.com/watch?v=iq202LKkeHI

NVIDIA's roadmap is at least as ambitious, targeting a 16-fold increase in processing power in a 6-year span (2007->2013), and almost certainly planning to keep going at roughly the same exponential rate.

Sickone August 6 2011 2:20 AM EDT

Sure, there might be speed bumps along the way, and products might get delayed a few months here and there. The NVIDIA "new tech" cards, for instance, recently had their expected release date pushed back about half a year due to problems with the fab process, and I bet Intel will also have some teething problems somewhere along the way - but it won't be long until they recover the "lost oomph" once the obstructions get cleared. Still, to expect a 10x increase in processing power in 6 years when the industry is TRYING to get a 16x increase in 6 years is actually quite conservative.

Duke August 6 2011 9:53 AM EDT

http://www.anandtech.com/bench/Product/142?vs=48


Core 2 Extreme QX9770 (quad core), release date Aug 2008
i7-980X (Gulftown), release date Jul 2010

I think we can agree that is far from a 100% performance increase. Under real-world applications, performance never scales linearly. As for NVIDIA's claims, I've learned they are full of scrap. Please don't use the term "3D transistor"; it's a marketing buzzword. A transistor as a whole has been 3D for as long as transistors have existed. Use the term "tri-gate" or "3D gate".

Sickone August 6 2011 1:07 PM EDT

Please don't use the term "3D transistor"
That's why I said "so-called" 3D transistors.

QX9770 vs 980X; I think we can agree that is far from a 100% performance increase.
The Passmark ratings of those two processors you listed are 5025 and 10599, which means the actual performance SLIGHTLY MORE than doubled.
http://www.cpubenchmark.net/cpu_list.php
I prefer larger gaps in time, because they "smooth over" the natural bumps.

As for NVIDIA's claims, I've learned they are full of scrap.
They claim they *want* a 16-fold increase in GPU processing power in 6 years; maybe they won't manage THAT much, but still, close to one order of magnitude is not that unbelievable.
A more relevant 6-year-gap comparison?
http://www.videocardbenchmark.net/gpu_list.php
GeForce GTX 480 (March 2010), rating 3528, vs. GeForce 6800 Ultra Extreme (June 2004), rating 477.
Granted, that's only a 7.4x raw performance increase (i.e., higher FPS in older reference games), but graphics quality increased a lot more too (more detailed textures, better shaders, better antialiasing, etc.).
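And the same quick extrapolation for the two GPU scores above, compared against NVIDIA's stated 16x-in-6-years target (again, just arithmetic on the numbers already in this thread):

```python
# Implied yearly growth from the two GPU scores cited above, extrapolated
# against NVIDIA's 16x-in-6-years target (arithmetic only, no new data).
old, new, years = 477, 3528, 6    # 6800 Ultra Extreme (2004) vs. GTX 480 (2010)

per_year = (new / old) ** (1 / years)
print(f"observed: {new / old:.1f}x over {years} years (~{per_year:.2f}x per year)")
print(f"target:   16x over 6 years (~{16 ** (1 / 6):.2f}x per year)")
```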

Sickone August 6 2011 1:21 PM EDT

As for why those benchmarks you linked don't "show" a doubling in performance - well, most of them DO NOT have the CPU as the actual bottleneck; quite a few of them have either the video card or the HDD as the bottleneck.
In the benchmarks where CPU/memory usage is almost the only thing that matters, the newer CPU shows it's actually performing more than twice as well.

Duke August 6 2011 3:11 PM EDT

The Passmark ratings of those two processors you listed are 5025 and 10599, which means the actual performance SLIGHTLY MORE than doubled.
http://www.cpubenchmark.net/cpu_list.php
I prefer larger gaps in time, because they "smooth over" the natural bumps.

Reply: I would suggest you bring real-world application benchmarks, not a synthetic benchmark whose scores are really lousy.

Sickone August 7 2011 12:05 AM EDT

I would suggest you bring real-world application benchmarks, not a synthetic benchmark whose scores are really lousy.

It's like you protesting that no, this carburetor is not 125% more efficient than the old one just because the car does not go 125% faster.
That's disingenuous, at best.

There are very few "real world applications" where the CPU is not bound/limited by the performance of some other piece of technology inside the PC - a piece of technology that also evolves, but in far less smooth steps.
If you want to make a fair comparison, you'd need a "test case" where all the other limiting factors are no longer an issue at all, which could prove extremely difficult for what you call a "real world application".

How exactly would you propose to do that, other than with a synthetic benchmark?
A synthetic benchmark that focuses ONLY on CPU power is the most relevant one when assessing pure CPU performance, which is exactly what we are talking about in the first place - the increase in processing power.

Duke August 7 2011 12:44 AM EDT

The test that you are showing is very similar to SiSoft Sandra, where the exe is small enough to run from the L2 cache, there are zero branches in the code (just a long series of independent ALU and FPU instructions), an effectively infinite number of threads can be spawned, and there is zero interaction between them.

So if I understand what you're saying, this is more useful than a real-world application? It's called a system for a reason, you know. Getting raw FPU/ALU performance is great, but in real life code has dependencies, and not every type of workload can spawn a large number of threads.

http://www.tomshardware.com/charts/desktop-cpu-charts-2010/Video-Editing-Adobe-After-Effects-CS5,2427.html

Here's a good example: the i7-2600K (Sandy Bridge) is faster than Gulftown while having a lower core count, and also a lower theoretical max output under SSE/ALU/FPU code.

Sickone August 7 2011 1:51 AM EDT

So if I understand what you're saying, this is more useful than a real-world application?

I am saying that it's more useful for measuring *raw CPU performance*, which it was DESIGNED to measure.
How you actually use that performance in any "real world applications" is a different story, and a far, far, FAR more complicated one at that.

Sickone August 7 2011 2:18 AM EDT

IF there were no bloat in the code (which is sadly almost never the case), and IF the software took full advantage of the available hardware, and IF each and every individual component's performance scaled up by the same factor, then yeah, you WOULD see a doubling of actual overall performance even every 18 months.
But there's always some degree of bloat in the code (more and more, actually), software takes years to fully take advantage of new hardware ("paradigm shifts" if you will), some components barely change for a good while and then all of a sudden jump up radically, and so on and so forth.

Now, if we're talking real world applications, compare the following PC 3D shooter releases:
1993, Doom
1996, Quake
1998, Half-Life ; Unreal
2001, Return to Castle Wolfenstein
2004, Far Cry ; Half-Life 2
2007, Crysis ; BioShock
2010, Metro 2033
Now tell me they're not HUGE improvements, each and every one of them, when ramped up to the max possible with their game engines, and tell me how well you think each would run on hardware that came out in the "previous cycle".

Duke August 7 2011 2:20 AM EDT

My original point is that performance does not double every 2 years. Well, not anymore: SRAM cells have not scaled well in the last 3 transitions, and I/O parts are almost impossible to scale down. The power envelope stays the same or decreases; Intel (Ultrabook) is aiming at a 15-watt TDP for mainstream CPUs by 2013. The percentage of transistors being invested in the GPU is increasing. Ivy Bridge's main improvements will be in AVX and, again, a 50% wider GPU core and larger cores. Most of the increase in transistors will go to the GPU.

Sickone August 7 2011 2:25 AM EDT

My original point is that performance does not double every 2 years

My original point is that raw processing power usually doubles roughly every 2 years.
How you choose to define "performance", and how raw processing power actually affects that particular measure of performance, that's... complicated.

Duke August 7 2011 11:20 AM EDT

doubling of actual overall performance even every 18 months


Doubling performance under the same fab process? I think I'll end this here.

Sickone August 7 2011 12:10 PM EDT

Of course not with the same fab process, of course not with the same architecture, and not even necessarily with the same number of cores.

Duke August 7 2011 2:40 PM EDT

http://www.spec.org/cpu2006/results/res2010q1/cpu2006-20100215-09681.html

vs

http://www.spec.org/cpu2006/results/res2008q4/cpu2006-20081024-05695.html

Sickone August 7 2011 5:30 PM EDT

When a 33% core-count increase and a 4% per-core clock-speed increase (IGNORING any technology differences, or the fact that the newer one can also auto-boost its speed upwards by another few good percent) only result in a 22%/28% increase in the geometric mean of the scores, you just know that you're NOT looking at something directly proportional to raw processing power alone.
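To make that explicit with the percentages above (a naive upper bound that ignores the architecture and turbo differences, as noted):

```python
# Naive upper bound from core count and clock alone, vs. the observed
# SPEC geometric-mean gains mentioned above (ignores architecture/turbo).
core_gain, clock_gain = 1.33, 1.04
print(f"naive upper bound: {core_gain * clock_gain:.2f}x")  # ~1.38x
print("observed gains:    1.22x / 1.28x (the two reported geometric means)")
```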

Duke August 8 2011 12:15 AM EDT

A core-count increase has never given a linear increase in performance, and it never will, even under server workloads.

http://www.cse.wustl.edu/ANCS/2007/slides/Bryan%20Veal%20ANCS%20Presentation.pdf

Not as relevant since the introduction of AMD Opteron and Intel Nehalem, but they have created new issues.

http://halobates.de/lk09-scalability.pdf

http://cache-www.intel.com/cd/00/00/36/03/360363_360363.pdf

From Intel's own white paper, Moore's law is about doubling the transistor count, not the performance. I'll even argue that we are no longer in the era of doubling transistor count, but in one of improving performance per watt. As for fab processes, they are not on an 18-month pace but a 24-month pace, and that's ONLY for Intel; the others are falling behind.

Duke August 8 2011 12:17 AM EDT

Commonly cited as "Processor speeds double every eighteen months," Moore's Law actually states that the doubling effect is relative to the number of transistors that can be put into a chip. Practically speaking, for years this meant that processors did, in fact, double in speeds every eighteen months or so, leading developers and IT environments into a beneficial arrangement. If the developed program wasn't fast enough, all that was necessary to improve the speed was patience and a new system hardware upgrade. A year and a half later, without any work whatsoever, the program's processing power doubled. With even some modest performance tuning work on the part of the developer, programs could appear to double, triple or even quadruple in speed. But by 2003, that doubling effect started to wane, and by 2004, it came to a very visible halt.

Sickone August 8 2011 4:11 AM EDT

A core-count increase has never given a linear increase in performance, and it never will, even under server workloads.

Good job quote-mining rather old whitepapers and interpreting things ever so slightly out of context. Is the scaling always perfect? Of course not. But in the right circumstances, the scaling CAN become NEARLY perfect.
You have to design your software around the hardware to take full advantage of it. It's trivially obvious that older applications that were NOT designed with massive multithreading in mind will NOT fully benefit from an increased core count. As they say, "duuh!"
But here's the thing: coders HAVE to adapt. Software WILL become more and more efficient at using multiple cores. Yes, there will still be losses, but they'll also become smaller and smaller.
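A hand-wavy way to see both sides of this - my own illustration using Amdahl's law, not anything from the linked papers - is that the serial fraction of the code decides how close to linear the core scaling can get:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.50, 0.90, 0.99):
    row = ", ".join(f"{n} cores -> {amdahl(p, n):.2f}x" for n in (2, 4, 8, 16))
    print(f"parallel fraction {p:.0%}: {row}")
# At 99% parallel, 8 cores give ~7.5x (nearly linear);
# at 50% parallel, even 16 cores top out below 2x.
```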

From Intel's own white paper, Moore's law is about doubling the transistor count, not the performance. I'll even argue that we are no longer in the era of doubling transistor count, but in one of improving performance per watt.

Oh boy. Two things. No, make that sort of three.

One, as I stated much earlier, Moore's law is not so much a law anymore as it is a guideline for manufacturers and partially self-fulfilling prophecy.

Two (and sort of three): doubling the transistor count in the same chip area DOES lead to a near-doubling of performance. The very same thing ALSO increases the performance per watt AND the maximum frequency. It's not quite a doubling, because you start getting energy losses from quantum tunneling, which reduce transistor efficiency as you scale down the fab process. Intel's new transistor design helps further by shifting the performance-per-watt ratio in the right direction before miniaturization, mitigating the efficiency losses inherent at that scale.

And again, Moore's law IS NOT A LAW. It's a general trend observation, which has spurred manufacturers to set future goals. While they may not always make it in time, and it may slow down every time another technical hurdle appears, things speed right back up as soon as that hurdle is passed - and on a LONG ENOUGH timescale, Moore's "law" of exponential growth, while not quite perfect, still adequately describes performance increases.

Sickone August 8 2011 4:23 AM EDT

Also, I marvel at the audacity of quote-mining "against Moore's Law" (so to speak) from a whitepaper whose main focus is scalability across multiple cores - a paper that actually concludes that, with proper software-hardware coordination, scaling up to 8 cores can be made practically linear.

Marlfox [Cult of the Valaraukar] August 8 2011 8:37 PM EDT

And the Nerdiest Argument of the Year award goes to...