shivoa

Phaedrus' Street Crew
Everything posted by shivoa

  1. I just had to Google for pictures of US keyboards to realise this wasn't some in-joke I wasn't getting. Our keyboards ship ¬ enabled (just a shift-modifier away) thanks to the extra key between the shift and z keys. https://en.wikipedia.org/wiki/British_and_American_keyboards ¬ ¬
  2. On home streaming and mobile devices: it's not Steam IHS or iOS, but for anyone with an nVidia card and an Android portable there's a hack of the GameStream technology* called LimeLight: https://play.google.com/store/apps/details?id=com.limelight It seems pretty good, with reporting of decoder lag etc. as well as options for 30/60fps, 720p or 1080p, and a bandwidth cap.

     I rendered some of The International 4 on my gaming PC but watched it from another room via my Nexus 7 (being able to take control to look at data feeds, customise the UI to my preferred skin, and inspect spell tooltips at will made this much preferable to loading up the official streams, and my locally rendered 1080p30 stream at >10mbps looked quite a lot better than normal internet video), but other than that I've only used it for a small amount of gaming.

     I think Steam IHS is a bit better than nVidia's GameStream from what I've used of the two (slightly less lag, doesn't drop frames as often), but then I'm running the nVidia application anyway to get access to ShadowPlay ('GameDVR' functionality), and Steam's Android client has zero streaming options while this 3rd party app hooks in perfectly to the nVidia service and even works with a wide range of controllers (wired & bluetooth). I'm also usually seeing GameStream at its worst (my guess is the wifi in a tablet isn't great, while I use Steam IHS on a wired connection to my 2nd PC or on the wifi to my laptop, so that may account for the difference - LimeLight certainly thinks there's 40ms+ of extra lag from the video decode on my tablet end of the equation), so maybe it's not fair to say Steam's implementation is better. I guess more choice is good, so it's nice to have options.

     I believe AMD have their own Gaming Evolved streaming but I have not seen any Android apps that can hook into it.

     * http://shield.nvidia.com/play-pc-games/ (No nVidia branded Android device required)
  3. Listening to Steam time played being reported as lifespan-eaten values, I'm wondering how much time Chris has actually spent in these games. We all know Steam's time played records are rather iffy (idle time when you walked away from the game to make some tea or the like will boost them, while Steam has also been rather good at failing to track additions thanks to momentary connectivity losses to the Steam servers and other bugs), and how long ago did Steam start tracking time played? Was Zuma Deluxe out on the platform for quite a while (2006, if Steam's store page is accurate) before they started tracking that data, and when did Chris start playing?

     Is this GPGPU (compute) stuff? GPUs obviously accelerate rendering (they are the things DX or OGL passes data to for all the polygon render work), but they are now also starting to be used for the data pass before the triangle-drawing stage, beyond the original idea of shaders as small code fragments run during the render stage. The way GPU acceleration typically works is that the inner loop is pushed to the GPU to operate on in a massively parallel way. Your CPU has good performance for general code thanks to branch prediction, out-of-order execution, and several other tweaks to a classic processor (required because of the deep pipeline: knowing the result of an if statement may take quite some time, so you have to get good at guessing which branch to start executing down, and if you guessed wrong you have to flush and go back, generating heat for no real work done), while the GPU is a far dumber beast but with so many processing units that it can get the job of running the millions of code fragments needed to render each frame done in reasonable time. For this reason the use of GPU compute has traditionally been limited to looking at a loop (which gives obvious parallelisation if each iteration isn't dependent on the previous one, say when munching a large array to repeatedly calculate new positions or something), making a kernel out of the inner loop to throw at the GPU, avoiding branches, and keeping the fragments as small as possible. It makes sense, as that is the traditional work a GPU has been tuned to perform well at.

     Why I think you're talking about this specific element is that you bring up ATi (AMD). A year ago nVidia announced Kepler and the introduction of Dynamic Parallelism in CUDA 5.0, which allows the GPU to spawn off new work without going back to the CPU to manage the deployment of that work. Kernels can create child kernels without the CPU being involved, and some of their numbers show that certain tasks were being constrained by the CPU needing to manage it all on previous hardware/software (a rough sketch of what that device-side launching looks like is below, after these posts). As far as I'm aware, AMD have not responded with a similar tech for reducing the CPU load on compute tasks for their architecture. This sounds like it is what you're describing (please correct me if I've got the wrong end of the stick, I didn't listen to the podcast you reference), and searching for nVidia Dynamic Parallelism should provide some decent background reading if you're looking for more detailed information.
  4. I'd just like to join the above listeners in saying that the reminder of tucows' existence (and what their acronym stands for) was completely effective in getting me to make sure Hover.com/wizard was my first choice for domain registration. Idle Thumbs: 100% effective sponsorship.
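
Not from the original posts: below is a minimal sketch of the Dynamic Parallelism pattern described in post 3, assuming Kepler-class hardware (compute capability 3.5+) and CUDA 5-era syntax. The kernel names, array size, and per-element arithmetic are illustrative placeholders; the point is just that the parent kernel launches the child grid on the device, with no CPU round trip.

    // A minimal sketch of CUDA Dynamic Parallelism (CUDA 5.0+, compute capability 3.5+).
    // Build with something like: nvcc -arch=sm_35 -rdc=true sketch.cu -lcudadevrt
    // Names, sizes, and the per-element maths are illustrative placeholders.
    #include <cuda_runtime.h>

    // Classic GPGPU pattern: the "inner loop" becomes a kernel, one thread per element.
    __global__ void childKernel(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] = data[i] * 2.0f + 1.0f;  // placeholder per-element work
    }

    // With Dynamic Parallelism the parent kernel can decide, on the GPU, how much
    // follow-up work is needed and launch it directly, with no round trip to the CPU.
    __global__ void parentKernel(float *data, int n)
    {
        if (blockIdx.x == 0 && threadIdx.x == 0) {
            int threads = 256;
            int blocks = (n + threads - 1) / threads;
            childKernel<<<blocks, threads>>>(data, n);  // device-side launch
            cudaDeviceSynchronize();  // CUDA 5-era device-side wait for the child grid
        }
    }

    int main()
    {
        const int n = 1 << 20;
        float *d_data;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));

        // One launch from the host; the rest of the work is spawned on the GPU itself.
        parentKernel<<<1, 1>>>(d_data, n);
        cudaDeviceSynchronize();

        cudaFree(d_data);
        return 0;
    }

Without this feature the equivalent flow is the host looping: launch a kernel, wait for it, inspect the results, launch the next one, which is exactly the CPU-side management overhead the post says Kepler removed.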