Qualcomm Announces New 2.5GHz Snapdragon 805 CPU And 40% More Powerful Adreno 420 GPU, Coming In 2014

Time, like an ever-rolling stream, bears all outdated hardware away. Qualcomm is pretty eager to top itself when it comes to ARM architectures, and to that end has announced its latest high-end CPU and GPU chips set to fill future smartphones and tablets. The Snapdragon 805 CPU and the new Adreno 420 GPU will be ready for mass-produced devices in the early half of 2014.

So what has Qualcomm done to make this new system-on-a-chip shine? On the CPU front, the maximum clock speed per core has been bumped to an even speedier 2.5GHz, up from 2.3GHz on the current 800. Memory bandwidth has been doubled to 25.6GB per second, which should provide an even better improvement for everyday apps than raw speed. The Snapdragon 805 will integrate with the current Gobi MDM9x25 LTE modem or the newly-announced MDM9x35, which uses 20-nanometer process technology for better download speeds, lower power consumption, and smaller size. And gigapixel-per-second imaging throughput should make the overpowered cameras of current high-end phones even more impressive.
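As a back-of-envelope check on that bandwidth figure, the arithmetic is simple. The memory configuration below (a 128-bit LPDDR3 interface at 800MHz) is my assumption for illustration, not something stated in the announcement:

```python
def ddr_bandwidth_gbps(bus_width_bits, clock_mhz):
    """Peak theoretical bandwidth in GB/s for a double-data-rate memory bus."""
    transfers_per_sec = clock_mhz * 1e6 * 2   # two transfers per clock (DDR)
    bytes_per_transfer = bus_width_bits / 8
    return transfers_per_sec * bytes_per_transfer / 1e9

# A 128-bit (4 x 32-bit) interface at 800 MHz:
print(ddr_bandwidth_gbps(128, 800))  # -> 25.6, matching the quoted 25.6 GB/s

# Halving the bus width gives the roughly 12.8 GB/s class of the previous generation:
print(ddr_bandwidth_gbps(64, 800))   # -> 12.8
```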

But perhaps the most dramatic announcement is the Adreno 420 GPU. Combined with the new CPU, the new architecture is designed to deliver graphics and video at up to 4K resolution. There's also support for H.265 decoding, an ultra-quality standard that's just emerging (and probably won't be relevant for Android devices for a while). Between enhanced software support, high-density rendering, hardware tessellation, geometry shaders, and other things that make PC gamers drool, Qualcomm is claiming a 40% increase in performance over the Adreno 330.

It's a pretty safe bet that HTC, LG, ASUS, and (in some cases) Samsung will have hardware featuring the new CPU and GPU some time next year. Announcements, if not actual demonstrations, at Mobile World Congress would just about fit. You can dig into the technical details in the press releases below.

NEW YORK, Nov. 20, 2013 /PRNewswire/ -- Qualcomm Incorporated (NASDAQ: QCOM) today announced that its subsidiary, Qualcomm Technologies, Inc., introduced the next generation mobile processor of the Qualcomm® Snapdragon™ 800 tier, the Qualcomm Snapdragon 805 processor, which is designed to deliver the highest-quality mobile video, imaging and graphics experiences at Ultra HD (4K) resolution, both on device and via Ultra HD TVs. Featuring the new Adreno 420 GPU, with up to 40 percent more graphics processing power than its predecessor, the Snapdragon 805 processor is the first mobile processor to offer system-level Ultra HD support, 4K video capture and playback and enhanced dual camera Image Signal Processors (ISPs), for superior performance, multitasking, power efficiency and mobile user experiences.

The Snapdragon 805 processor is Qualcomm Technologies' newest and highest performing Snapdragon processor to date, featuring:

  • Blazing fast apps and web browsing and outstanding performance: Krait 450 quad-core CPU, the first mobile CPU to run at speeds of up to 2.5 GHz per core, plus superior memory bandwidth support of up to 25.6 GB/second that is designed to provide unprecedented multimedia and web browsing performance.
  • Smooth, sharp user interface and games support Ultra HD resolution: The mobile industry's first end-to-end Ultra HD solution with on-device display concurrent with output to HDTV; features Qualcomm Technologies' new Adreno 420 GPU, which introduces support for hardware tessellation and geometry shaders, for advanced 4K rendering, with even more realistic scenes and objects, visually stunning user interface, graphics and mobile gaming experiences at lower power.
  • Fast, seamless connected mobile experiences: Custom, efficient integration with either the Qualcomm® Gobi MDM9x25 or the Gobi MDM9x35 modem, powering superior seamless connected mobile experiences. The Gobi MDM9x25 chipset announced in February 2013 has seen significant adoption as the first embedded, mobile computing solution to support LTE carrier aggregation and LTE Category 4 with superior peak data rates of up to 150Mbps. Additionally, Qualcomm's most advanced Wi-Fi for mobile, 2-stream dual-band Qualcomm® VIVE™ 802.11ac, enables wireless 4K video streaming and other media-intensive applications. With a low-power PCIe interface to the QCA6174, tablets and high-end smartphones can take advantage of faster mobile Wi-Fi performance (over 600 Mbps), extended operating range and concurrent Bluetooth connections, with minimal impact on battery life.
  • Ability to stream more video content at higher quality using less power: Support for Hollywood Quality Video (HQV) for video post processing, first to introduce hardware 4K HEVC (H.265) decode for mobile for extremely low-power HD video playback.
  • Sharper, higher resolution photos in low light and advanced post-processing features: First Gpixel/s throughput camera support in a mobile processor designed for a significant increase in camera speed and imaging quality. Sensor processing with gyro integration enables image stabilization for sharper, crisper photos. Qualcomm Technologies is the first to announce a mobile processor with advanced, low-power, integrated sensor processing, enabled by its custom DSP, designed to deliver a wide range of sensor-enabled mobile experiences.
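To put that gigapixel-per-second camera figure in perspective, a rough sketch of how much throughput common capture scenarios consume. The sensor resolutions and frame rates below are illustrative assumptions, not Qualcomm specifications:

```python
def required_throughput_gpix(width, height, fps):
    """Pixels per second, in gigapixels, to read out a sensor at a given frame rate."""
    return width * height * fps / 1e9

# 4K UHD video capture at 30 fps:
print(required_throughput_gpix(3840, 2160, 30))   # ~0.25 Gpixel/s

# A hypothetical ~21 MP still sensor in 30 fps burst mode:
print(required_throughput_gpix(5344, 4016, 30))   # ~0.64 Gpixel/s
```

Both fit comfortably inside a 1 Gpixel/s budget, which is the point of the claim.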

The Snapdragon 805 processor is sampling now and expected to be available in commercial devices by the first half of 2014.

NEW YORK, Nov. 20, 2013 /PRNewswire/ -- Qualcomm Incorporated (NASDAQ: QCOM) today announced that its subsidiary, Qualcomm Technologies, Inc., introduced its fourth-generation 3G/LTE multimode solutions with the newest modem chipset, the Qualcomm® Gobi™ 9x35, and RF transceiver chip, the Qualcomm WTR3925, designed for industry-leading 4G LTE Advanced mobile broadband connectivity. Both products are fourth-generation 3G/LTE multimode solutions from Qualcomm Technologies and offer significant improvements in performance, power consumption and printed circuit board area requirements.

The Qualcomm Gobi 9x35 is the first announced cellular modem based on the 20 nm technology node with support for global carrier aggregation deployments up to 40 MHz for both LTE TDD and FDD Category 6 with download speeds of up to 300 Mbps. The Gobi 9x35 is backwards compatible and supports all other major cellular technologies, including DC-HSPA, EVDO Rev. B, CDMA 1x, GSM and TD-SCDMA. The Qualcomm WTR3925 is the first announced RF transceiver chip based on the 28 nm process, and is Qualcomm Technologies' first single-chip, carrier aggregation RF solution that supports all carrier aggregation band combinations approved by 3GPP. The WTR3925 pairs with the Gobi 9x35 chipset and the Qualcomm RF360™ Front End Solution, which enable the mobile industry's premier global, single-SKU LTE platform.

The Gobi 9x35 and WTR3925 are specifically designed to use less power and occupy less printed circuit board area and continue the trend towards tighter integration, smaller size and increased performance. The 40 MHz carrier aggregation capability of the Gobi 9x35, coupled with the comprehensive carrier aggregation band support of the WTR3925, is engineered to allow network operators to combine their fragmented spectrum in all possible 3GPP-approved combinations of 5 MHz, 10 MHz, 15 MHz, and 20 MHz bandwidths to increase capacity and service more subscribers with an improved end-user experience. The WTR3925 also incorporates the Qualcomm IZat™ location platform designed for delivery of seamless, global location.
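The spectrum-combining arithmetic in the paragraph above can be sketched directly: enumerate the two-carrier pairings of standard LTE bandwidths and see which reach the 40 MHz ceiling. This is a simplified illustration -- the actual 3GPP-approved band combinations are far more constrained than a sum of widths:

```python
from itertools import combinations_with_replacement

CARRIER_WIDTHS_MHZ = [5, 10, 15, 20]   # the carrier bandwidths named in the release

# All two-carrier aggregation pairings:
pairs = list(combinations_with_replacement(CARRIER_WIDTHS_MHZ, 2))
print(pairs)

# Only one pairing reaches the full 40 MHz the Gobi 9x35 supports:
full = [c for c in pairs if sum(c) == 40]
print(full)  # [(20, 20)]

# Peak rate scales roughly linearly with aggregated bandwidth:
# Category 4 reaches 150 Mbps on a single 20 MHz carrier, so 2 x 20 MHz gives
print(150 * (20 + 20) / 20)  # -> 300.0 Mbps, the Category 6 figure quoted above
```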

For OEMs, the combination of the Gobi 9x35 modem and WTR3925 chipset enables a powerful, single platform that can be used to launch LTE Advanced devices faster at a global scale. Together, these solutions are designed to deliver up to 2X faster LTE Advanced, CAT 6 up to 300 Mbps, along with dual carrier HSUPA and dual band multi-carrier HSPA+.

It is anticipated that Qualcomm Gobi 9x35 and WTR3925 will begin sampling to customers early next year.

Jeremiah Rice
Jeremiah is a US-based blogger who bought a Nexus One the day it came out and never looked back. In his spare time he watches Star Trek, cooks eggs, and completely fails to write novels.
  • Dale

    GS5? If so, I think I can hold on to my Gnex for a little while longer...

    • Razo_E

      Yeah, by the time I pay off my GS4, it'll be time for the GS5!

    • PhoenixPath

      I'd love to see this in the M8 (HTC One 2?)...

  • http://www.whatupgoingon.com Greg Macek

    Cue every phone that has a Snapdragon 800 in it being called "outdated" and "old tech" in the next few weeks.

    "How could [VENDOR] not include the latest and greatest CPU/GPU? The chipset was just announced."
    "I could totally do more with the 805 in my phone. This sucks."

    • EH101

      Technically it won't be outdated or old tech until this new chip actually hits the market... but it will still be bad ass outdated and old tech.

      To me, it's like having a Core i7 processor in a PC; yeah, a new one is released every year (usually), but the newly old one still probably kicks the crap out of everything AMD makes and every non-i7 and non-Xeon processor Intel makes.

      • UtopiaNH

        Uh... i7s don't beat i5s by all that much compared to the massive increase in cost.

        If you don't overclock:

        If you overclock:

        Performance on these is pretty insane for the money, and paying for the i7 gets you marginal increases at best.

        • EH101

          I know, I have a non-overclocked i5 myself in my gaming rig. But the fact remains that i7s are more powerful, even if by a small margin, and are generally more powerful than the next generation i5s as well.

          Check out the Haswell vs Ivy Bridge comparison at anandtech if you want, but in summary, the Ivy Bridge i7 outperformed the Haswell i5 in almost every area.

          • UtopiaNH

            Yes, they do, but that's because Sandy Bridge was an enormous leap in power, while Ivy Bridge and Haswell have been hyperfocused on power consumption. It's not a standard situation; if there's another big jump in architecture due up in the future, the newer i5s will significantly outperform older i7s, as with the Sandy Bridge generation.

          • GraveUypo

            both intel and AMD are focusing on improving their IGPs as opposed to the cpus themselves. i expect the stagnancy to continue for a while.
            it's fun paying more for something i won't even use. i wish they made a cheaper cpu without the stupid IGP.

          • EH101

            This. I understand IGPs are useful in laptops and such, but I'm sure most don't need them for desktop use. A good compromise would be to just leave the IGPs as they are and continue improving the CPU side. Current IGPs are more than good enough for the basic desktops non-gamers (and non-video designers, etc.) use, imo.

          • EH101

            You do realize the Clarkdale i7 EE beat the Sandy Bridge i5 in a lot of benchmarks right? (It, the i7, also got beat badly in others; must've been some interesting tradeoffs in that design iteration) There's no reason to believe that won't be the case when the next big leap occurs, unless they completely scrap the current Core architecture and come out with something amazing/different.

        • GraveUypo

          yep i use a now-old i5 2500k at 4.6ghz and it still beats pretty much every stock-clocked cpu out there in close to all tasks (in some rare cases like 3D rendering it gets beaten by 6+ core cpus... from intel)

          • EH101

            Just curious, but how high were you able to OC before settling on 4.6? Or did you get that high and just say "Good enough"?

          • GraveUypo

            the latter. in fact i had stopped at 4.2ghz, then my friend said "only 4.2ghz? mine goes 4.6 no sweat" and then i tried that and voila, here i am still. no temperature issues (i've got a pretty big cooler) and certainly more performance than i could use for most tasks (mental ray rendering being the sole exception. i could use a couple HUNDRED more gigahertz there).

          • EH101

            Awesome. Glad you were able to get it nice and stable. I miss OCing sometimes but I forced myself to buy a non-k chip this cycle so I wouldn't "waste" my time messing with my computer lol. Maybe in 3 years, when I plan to upgrade again, I'll be able to get back in the game. (though I'm not sure the ridiculously powerful CPU's of tomorrow would even make it worthwhile. .. hmm)

          • GraveUypo

            i know how you feel. i'm not an enthusiast overclocker (not anymore anyways, i was ten years ago), so i just find the sweet spot and stay there. though i will ALWAYS overclock when given the chance. it saves too much money to pass up.

    • Sean Lumly

      Even with "40% more performance," existing Snapdragon 800 devices should fare quite well with the same games. You won't get optimal performance, but the game should still be entirely playable.

      • http://twitter.com/anishbhalerao Anish Bhalerao

        Dude. Sarcasm.

    • Adrian Meredith

      for me the snapdragon 800 is really the minimum for running at 1080p so this one should be the first real gpu to be practical at the crazy nexus 10 resolution. If only the new nexus 10 could use this... not likely though

  • deltatux

    I'm just waiting for Cortex A53/A57 based designs instead, yes this is a great stepping stone but in order to really blow me away, there needs to be more than just a CPU speed bump. The new GPU looks pretty exciting though, can't wait to see how it really performs. My Nexus 4 is still a powerhouse so we'll see how Nexus 5 (2014) will do or maybe I'll wait until Nexus 5 (2015) before getting a new phone.

    • EH101

      Exactly why I'm content with waiting for the Note 4. (I have a little bigger than average hands; it suits me quite well.) The Note 3 is nice and all, but even with its quirks my Note 2 is still more than enough to hold me over for another year.

    • porter86

      You think your Nexus 4 is a powerhouse. Cute.

  • Kyle Riedemann

    I hope this isn't just used to make flagships with 4K screens. I'm more than fine with 1080p and an amazing performance increase, I really don't want a 4K screen and marginally better performance.

    • http://www.flickr.com/photos/77537273@N03/ Herman

      Heck, I'd rather take the battery life of a 720p display on a non-phablet sized device than 1080p.

      • Derik Taylor

        Yes. Just yes. On a mobile device, I am more about productivity, and I think that having high battery life and performance is better for that than having the highest resolution possible.

    • Sean Lumly

      Games can always render at lower resolutions (if developers support this idea). A couple of games show this capability off with good effect: Anomaly 2, Bladeslinger, Riptide GP, Beach Buggy Blitz (off of the top of my head). This number should increase in the future. In short, you get ultra-high res for viewing text/media/etc, and still get great game performance.

      • Kyle Riedemann

        I don't care about game performance, I have a Nexus 10 that falls short of 4K, and it struggles with some basic UI stuff and reserves half the RAM to feed the GPU. I just don't trust Samsung or HTC to handle that stuff efficiently. The overhead just isn't worth it, and nobody needs 4K on a phone. I get 4K recording, and 4K output, but a 4K display is just stupid until we get new battery tech at least.

        • Sean Lumly

          I have a Nexus 10 as well that I use daily (it's actually currently being used right beside my laptop, and it runs the UI very well). Perhaps the apps that you're using are slow?

          It would be bad business for screen developers to stop at 1080p for smartphones. VR is very real and -- if the current enthusiasm is any indication -- will be very popular. Consumer VR (e.g. Rift, Dive, vrAse, etc.) also relies on mobile hardware. It is important that screen manufacturers can produce the technology at scale to meet the demands of this market. This means mass production.

  • vwbeetlvr

    Aaaaaand my Galaxy Note 3 is now obsolete

    • http://www.flickr.com/photos/77537273@N03/ Herman

      I know, right? :(
      Here, let me recycle it for you. If you could just send it over to me..

    • Chris

      My Note 2 isn't even obsolete...

  • Stefan Eckhardt

    With a regular $10 magnifying glass from ebay you can almost see the individual pixels on a 5 inch 1080p display if you look real close. Clearly it's time for 4K displays in your pocket.

    • Sean Lumly

      The market is demanding these screens. Think about the impending VR craze, where HMDs (head mounted displays) exclusively use mobile hardware for key components.

      • B3nlok

        Sorry but no. I don't need 4K on 5" screen devices. So I can get stuck with <20 fps in most 3D games? So I have to charge my device 2x-3x a day because the battery can't handle the screen's power consumption? It's overkill and a waste of technology. Priorities are out of whack in the mobile space and this is the perfect example.

        • Sean Lumly

          Your mistake is thinking that I'm talking about you. I'm not. I'm talking about the larger market, and impending demand of a new sector.

          And this may be news to you, but just because a display is 4K, that doesn't mean games must: games can render at lower resolutions and games on the market are increasingly doing this.

          Increasing screen resolution was the major complaint of last year's 1080p. Faster clocks were also a complaint. Yet battery life remains similar and in some cases has improved. Why would this be different? Qualcomm is in the market to sell SoCs, which means that power consumption is the TOP priority. Period.

          • NinoBr0wn

            So you were being serious when you said the market is demanding 4K screens?

          • Sean Lumly

            Yes. Not 4K specifically, but there seems to be a growing demand for higher resolutions.

            And who knows? Higher resolutions may enable new types of uses beyond smartphones and VR.

          • Chris Seward

            The higher resolutions aren't necessarily for the phone display itself. I agree there is little reason for a 4K 5" display. But it would be nice to be able to output at native resolution to an external monitor or 4K TV over HDMI, which this video chip would support.


        • Michael Ta

          true. I'd rather have a battery that lasts a whole freaking week and a PPI that's around ~300 or a bit more.

      • Stefan Eckhardt

        Yes, something like Oculus Rift could use a 4K display. But there is no need to include mobile hardware capable of rendering 3D graphics in that resolution, as the pictures come from the PC hardware.

        The mobile hardware only would have to do simple stuff like menus and displaying a camera image so you don't have to take off the HMD for a quick look into the real world, but that doesn't have to be in 4K either.

        • Sean Lumly

          Having seen projects like Durovis Dive and vrAse, I can honestly say that I'm glad that smartphones are still going up in resolution. This is a niche use, but I am still happy for it.

          It also makes for an easy way to do screen mirroring on external screens, which are increasingly at these resolutions. Since smartphones are getting so powerful, they may very well become the headless laptop in your pocket! Certainly Samsung's Chromebook has shown us that a laptop can work with a mobile chipset, albeit a bit slowly, and mobile CPUs have already moved past it. It also makes a handy tool for consuming media on a large screen (eg. movies).

          When using a phone strictly as a phone, I would imagine that the benefits of such a high resolution would be negligible in every day use. Of course, with an eyes-on comparison, it may prove to offer a clarity that is slight but preferable.

          • Stefan Eckhardt

            I wasn't saying that there is no need for more powerful chipsets, but the message in the press release text is clearly hinting at upcoming 4K smartphones, which I still find ridiculous.

            I agree that the future lies in one device to carry around and at home just put it on the charging mat and connect wirelessly to a monitor, keyboard and mouse as your PC, or to fill the big screen. But we are not there yet, on many details. Mobile chipsets (even Snapdragon 805) are not powerful enough yet, storage is too small and Android is very far from ideal for a desktop environment. It will take at least another 5 years before it will all come together, consumer-friendly, I guess.

          • Sean Lumly

            "I wasn't saying that there is no need for more powerful chipsets, but the message in the press release text is clearly hinting at upcoming 4K smartphones, which I still find ridiculous."

            Nor was I.. Perhaps you need to re-read the post?

      • Roland Golden Bay


  • http://randomphantasmagoria.com/ Shawn

    And 40% more battery drain to go with it, I bet. Oh, and did I mention that mobile tech moves way too fast?

    • Sean Lumly

      This performance increase is likely an increase in the shader pipelines on die (I would bet the GPU arch is largely the same). This will be made possible by the 14nm (or 16nm) process that this SoC is most likely targeting, which means smaller transistors and lower power. With the same degree of utilization, I would expect this SoC to perform similarly to its predecessor.

      The Krait 450 is interesting. Since there are no increases in clocks or cores, it implies that we will be seeing a new architecture. In mobile, that means things are more efficient than they were last year (and likely larger on die).

      • renz

        will qualcomm fab this at samsung? heard that the next exynos will be 14nm. this SoC expected to come out in first half of next year so i'm expecting this one still on TSMC 28nm node

        • Sean Lumly

          I'm not sure where Qualcomm will fab.. But yes Samsung will have 14nm process capability next year, to match Intel's 14nm process. Thus Intel loses its process lead and Samsung's manufacturing looks much more attractive. I recall reading that TSMC will be at 16nm, which is great, but still at a disadvantage compared to Samsung's process. This should bode well for Samsung's business. I just hope they can handle the impending demand!

          There has been talk that Intel will start to produce other IP in its factories, and I can imagine that large companies like Qualcomm (and possibly Apple) may be customers if the price is right.

          If the next major Snapdragon is at 28nm, Qualcomm will be at an extreme disadvantage next year. They don't strike me as a company that would allow such a move (if they could possibly help it). Interestingly, they somewhat held their LTE modem tech over the competition's head and used pricing to make their SoCs more affordable (AFAIK). I could see Samsung pulling a similar stunt with almost exclusive control of a 14nm fab, making the latest Exynos a *very* attractive SoC for the money.

          That said, I don't think that the 805 is the only chip that Qualcomm will have for next year. I'm guessing that they will have a follow-up chip to be released later in the year. This would be similar to the 600 and 800 of 2013. This is purely wild speculation! :P

          • renz

            TSMC 16nm will be FinFET but idk if samsung 14nm will be planar or FinFET as well. intel will open its fab for other companies but i don't think they will open it up to a direct competitor such as qualcomm. right now intel is still trying to penetrate the mobile market with their new Atom SoC, so opening their fab and giving their process advantage to another player like qualcomm would not be advantageous to the Atom SoC's future. most likely they will be careful selecting which companies use their fab so there will be no direct competition to them.

            for qualcomm chip i honestly believe they will have something better than 805 next year.

          • Sean Lumly

            I also think that this isn't Qualcomm's only play for 2014.

            I recall that Samsung would be at Finfet for 14nm. I don't know much about fab, but I seem to recall reading that Finfet was the structure needed to move to 14nm in the first place.

    • BetterWithRoot

      "Oh, and did I mention that mobile tech moves way too fast?"

      I know right, someone should make a law about that, but do we need Moore?

  • Matt

    64 bit processing anyone?

    • http://www.flickr.com/photos/77537273@N03/ Herman

      The question is:


      • werw

        graphics performance

        • UtopiaNH

          graphics performance is not affected by 64-bit. It's the ARMv8 architecture that's important, more than it being 32- or 64-bit.

      • Matt Posey

        To process longer instructions in fewer clock cycles? It's about increasing computing efficiency. 64-bit processing isn't ONLY about expanding the capacities of the address bus beyond 4GB.

  • http://nikolaovcharski.com/ Nikola Ovcharski

    A beast!

  • http://youtube.com/user/CurelessSyn CurelessSynergy

    Incredible. Give it another month before it's obsolete again.

    • Himmat Singh

      This. Makes it such a painful task to buy a phone/tab knowing it's gonna be obsolete not long after it's in your hands. :(

  • Stylus_XL

    It's kind of a shame this won't make it to the next Nexus 10.

    • Himmat Singh

      Why not!?

  • Sean Lumly

    While the performance of the Adreno 420 GPU is exciting, the doubling of memory bandwidth is the real show-stealer here (IMHO) -- and presumably this will also be true for competing SoCs. For the first time, mobile GPUs will not only meet the computational muscle of last-gen consoles, they will also feature more memory bandwidth (in some cases). This is likely due to the fashionable trend of stacking memory directly on top of the SoC for a wider bus and lower power.

    But it doesn't end there. Mobile GPUs are hell-bent on efficiency to maintain modest power draw under very constrained conditions. This has led to a lot of innovation to mitigate costly memory transfers. In general, hardware tile-based rendering, deferred rendering, framebuffer compression, transaction elimination, aggressive texture compression, caching strategies, and the appropriate APIs should mean that the healthy 25.6GB/s of bandwidth goes much farther than that of last-gen consoles.
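    A rough sense of why those bandwidth-saving tricks matter. The frame sizes, rates, and pass counts below are illustrative assumptions, not measurements:

    ```python
    def framebuffer_traffic_gbs(width, height, bytes_per_pixel, fps, passes=1):
        """GB/s of raw memory traffic to write a framebuffer `passes` times per frame."""
        return width * height * bytes_per_pixel * fps * passes / 1e9

    # A single 32-bit 1080p surface written once per frame at 60 fps:
    print(framebuffer_traffic_gbs(1920, 1080, 4, 60))      # ~0.5 GB/s

    # Naive rendering touches each pixel many times (overdraw, blending,
    # post-processing); eight effective passes already eats a large slice
    # of a 25.6 GB/s budget before textures and geometry are counted:
    print(framebuffer_traffic_gbs(1920, 1080, 4, 60, 8))   # ~4.0 GB/s
    ```

    Tile-based renderers keep most of that traffic in on-chip memory, which is exactly the efficiency argument above.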

    And mobiles have other advantages. One of which is *much* more memory in a unified pool. The additional memory sits at a whopping 3GB today (compared to the 0.5GB in the Xbox 360) and is useful for storing higher-resolution assets (ie. 2D/3D textures, meshes), and provides the ability to alleviate (in some cases) computation (eg. LUTs). The unified memory pool opens up a new world of using the CPU much more intimately with rendering, though the GLES3 API lacks compute functionality (IIRC). Tools like Renderscript/Filterscript can also put those Krait 450 cores to good use! I'm looking forward to seeing how compute will creatively be used in game development.

    Sure, smartphone resolutions continue to go up, with Samsung readying 2.5K (2560x1600) displays for consumer smartphones. However, it's slowly becoming more common for game developers to render at sub-native resolutions (eg. 1080p or 720p), or at least provide the option to do so. This is a best-of-both-worlds scenario: the high resolution for text, photos, and surfing, without the increased rendering cost of driving such a high-resolution display. Here's hoping that sub-native-resolution rendering becomes a popular implementation choice for mobile devs.
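    The savings from sub-native rendering are easy to quantify, using the display and render resolutions mentioned above:

    ```python
    def pixel_fraction(render, display):
        """Fraction of display pixels actually shaded when rendering at a lower resolution."""
        render_w, render_h = render
        display_w, display_h = display
        return (render_w * render_h) / (display_w * display_h)

    # Rendering at 720p on a 2560x1600 panel:
    print(pixel_fraction((1280, 720), (2560, 1600)))  # -> 0.225, i.e. ~4.4x fewer pixels shaded
    ```

    Since fragment shading and framebuffer traffic scale with pixel count, that ratio translates almost directly into GPU headroom.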

    In short, if it can be played on a PS3/Xbox 360, the mobile phones of next year should be able to handle a port of the same title with similar levels of graphical fidelity. Expect an increasing trend of cross-platform ports arriving for the lucrative mobile market. And with the possibility of more $99 consoles, expect mobile devices to continue to compete with (and potentially increasingly displace) static consoles, with graphics that are "good enough" for many and prices that cannot be ignored.

    And the following year will mean faster mobiles still.

    • mikegonzalez2k

      Actually there is proof of this. Nvidia's Tegra 5 has been compared to the RSX (the GPU inside the PS3), and it can actually perform better than the PS3. So I would expect similar results from Qualcomm's next-gen GPU.

      It seems in the not-so-distant future, the trend will be: release a new console, then a few years later release a mobile GPU that can run as powerfully as that console.


      • Sean Lumly

        Exactly! In fact, with many modern engines supporting 'build-for-mobile' functionality (eg. Unity, Unreal Engine, Frostbite, Havok), I wouldn't at all be surprised to see simultaneous (or near simultaneous) releases for mobile, console, and desktop.

        EA has recently confirmed via the NY Times that Battlefield 4 will be coming to "high-end mobile" (http://goo.gl/cWh0Hf)! That is exciting stuff. If this is a trending focus (and there is reason to believe that it is), future games may be developed with this outcome in mind and include mobile versions from the beginning.

        I hope for a few things:
        1) That these ported games feature quality settings (low/med/high/ultra). This is likely to happen as they are already appearing in Play store games, and this is a carry-over from PC. And:
        2) That Android evolves from being exclusively driven by mobile components; it should be possible to have an Android console with a desktop-grade graphics card or desktop APU.

        Such a development would ensure that Android compete on all fronts with the current leaders of the gaming market.

        • renz

          it is interesting to see what kind of performance all these devices can bring to mobile, but IMO the first and foremost thing is we need more quality games on android, not more of the freemium type.

          • Sean Lumly

            Agreed! It's unpopular to say this, but I feel that mobile games need to move from their $5 to $10 price range to attract large development studios that can release games for $50 on consoles and PC. It may also work to have some type of cross promotion that discounts multiple versions of the same game.

            For example: Pay $50 for a console game, and get the mobile version for $5. Or vice-versa.

            But yes, mobile games need to be of a higher quality to contend with consoles and PCs. The graphics won't be as good, but the gameplay/story/acting can be very similar, and the game just as fun.

      • Stefan Eckhardt

        Nvidia is always quick to produce bold statements. Back then they said the Tegra 1 would outperform the PS2, while it really took at least Tegra 3 to do so. And Tegra 1 turned out to be so abysmal, it hardly hit the market at all.

        Marketing is at least as important to Nvidia as research. They put money into exclusive deals to remove graphical features from mobile games when they run on competitors' chipsets. Even when some of those chipsets were superior, the PR made it look the other way around to the uneducated.

        • renz

          the only device with the original tegra that i know of is MS Zune.

        • mikegonzalez2k

          Normally I would agree, but the Tegra 5 is more than simply a marketing stunt. It has actually improved significantly from its previous iteration. The reason is that they were finally able to take Kepler, the architecture used in their supercomputer-class graphics card Titan, and scale its operation down to mobile capacity.

          In doing so they are able to run at nearly the same level of detail without a significant drop in performance. This has opened the way for a new line of mobile GPUs that will function nearly as well as their PC counterparts.

          So yes, they WILL be able to deliver better performance than the PS3.
          They have actually already run Battlefield 3 (the PC version) on a tablet. That was while they were in the prototype phase. Over the past year they've been optimizing even further, so it is very likely that what they say it can do is actually what it is capable of.

          Qualcomm knows this, and that is why this iteration of their latest chip has focused so heavily on improving the GPU. They want to have something to stand firmly against Tegra.

          • Stefan Eckhardt

            I would welcome it if you are right, but until it's tested in the open world outside of Nvidia's control I remain skeptical.

          • Sean Lumly

            Kepler on mobile isn't particularly special. Nvidia has taken a single (or a few) Kepler cores and arranged them on an SoC destined for mobile -- the Tegra 5. Desktop chipsets have many more of these Kepler cores, and hence require more power. This is the same strategy that nearly all modern mobile GPU architects employ: scaling the number of GPU cores to the specification of the chip to be manufactured.

            The typical ~20mm^2 mobile GPU will still be constrained by power, and will in no way be able to perform as well as a 550mm^2 behemoth desktop chip running at high voltages and with active cooling. To imply any differently is false advertising and/or brand confusion.

            That said, mobile Kepler will likely perform more-or-less the same as other next-generation GPUs released around the same time. Qualcomm's move is consistent with its past performance increases and par for the course in an industry paced by battery capacity and die process. This is in no way a rush to market to contend with Nvidia. Realistically, GPU designs are completed around three years before they are actually manufactured in consumer-ready chips (source: ARM).

            The most exciting part about this (IMHO) is that it will finally allow us to see just how mobile GPU cores perform (perf/mm^2 and perf/Wh) compared to modern desktop cores. Considering that mobile has been dead-set on performing well under constrained power conditions, I would take a wild bet that mobile GPU cores (eg. Adreno, Mali, PowerVR, Vivante, etc) on the whole will perform better than Kepler (per core). I could be dead wrong on this, but the playing field will be even, so the chip logic and die size will be the performance differentiators.
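            The per-core comparison proposed in this comment boils down to normalizing a benchmark score by die area and by power draw. A minimal sketch of that arithmetic, using entirely made-up illustrative figures (the scores, areas, and wattages below are assumptions, not measurements of any real GPU):

```python
# Hypothetical numbers purely to illustrate perf/mm^2 and perf/W
# normalization; none of these figures are real measurements.
def perf_per_mm2(score, area_mm2):
    """Benchmark score per square millimetre of die area."""
    return score / area_mm2

def perf_per_watt(score, watts):
    """Benchmark score per watt of power draw."""
    return score / watts

# Assumed figures for a ~20mm^2 mobile GPU and a ~550mm^2 desktop GPU:
mobile = {"score": 300.0, "area": 20.0, "watts": 2.0}
desktop = {"score": 6000.0, "area": 550.0, "watts": 250.0}

for name, gpu in (("mobile", mobile), ("desktop", desktop)):
    print(name,
          perf_per_mm2(gpu["score"], gpu["area"]),
          perf_per_watt(gpu["score"], gpu["watts"]))
```

            With numbers like these, the much bigger chip wins on raw score while the small chip can still win on both efficiency metrics, which is exactly the kind of comparison an even playing field would make possible.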

    • Adrian Meredith

      the only problem with comparing this to consoles is that many console games don't even render in 720p, whereas this will be running at 1080p at a minimum

      • Sean Lumly

        Games do not have to render at native res; they can render at a lower res and upscale for the performance benefits that brings. For example, the recent Anomaly 2 renders at 720p and upscales (IIRC), and there are other examples of this (eg. Bladeslinger). As resolutions continue to go up, I feel this will become a popular strategy to keep things running at reasonable speeds.
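        The savings from this strategy are easy to quantify: for a fragment-bound game, per-frame shading work scales roughly with the number of pixels rendered. A quick back-of-the-envelope sketch:

```python
# Pixel-count arithmetic behind rendering below native res and upscaling.
def pixels(width, height):
    """Number of pixels shaded per frame at a given resolution."""
    return width * height

native = pixels(1920, 1080)   # render at the panel's native 1080p
upscaled = pixels(1280, 720)  # render at 720p, then upscale to fit

# 1080p shades 2.25x the pixels of 720p, so a fragment-bound game
# rendering at 720p does only ~44% of the per-frame shading work.
print(native / upscaled)  # 2.25
```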

  • Simon Belmont

    The Snapdragon 800 can already capture and play back 4K video. It can also display it to 4K screens. See: http://www.qualcomm.com/snapdragon/processors/800 .

    I think H.265 is the big thing here (and the doubled RAM bandwidth, too). Smaller video files (lower bitrate), but the same video quality.
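    To put rough numbers on that point: H.265/HEVC is commonly cited as needing around half the bitrate of H.264 for similar perceived quality. The bitrates below are illustrative assumptions, not measured figures for any real codec implementation:

```python
# Illustrative only: assumed bitrates, not measured codec figures.
def file_size_mb(bitrate_mbps, seconds):
    """Approximate file size in megabytes for a given bitrate and duration."""
    return bitrate_mbps * seconds / 8  # megabits/s -> megabytes

minute = 60
h264 = file_size_mb(40, minute)  # hypothetical 4K H.264 bitrate
h265 = file_size_mb(20, minute)  # ~half the bitrate at similar quality

print(h264, h265)  # 300.0 150.0  (MB per minute of video)
```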

  • Ishaan Rajiv

    4K... Because 1080p is too mainstream, these days.

  • Frank Reiter

    Think Android TV, or Chromebook. For a phone I'm not convinced that anything over 720p is useful.

  • Bronislav Shtrom

    I wonder if this will make it in time for the Galaxy S5?

  • Matthew Fry

    4k displays you say... There's a tablet coming out soon with a 4k display...

  • Dee Norbert

    The question is whether it can run Crysis 3 at maximum settings in 4K resolution. :lol:

  • Jamal Adam

    I shall assume that the M8 from HTC will come with one of these.

  • Allan

    I'm liking the prospect of the next generation of Adreno GPUs and the much-boosted LPDDR3, but not so much what Qualcomm has done with Krait. Krait 200 at the time of its introduction was a revolutionary core, providing performance very competitive with the A15, which was still months from implementation (of course, the A15 turned out to be a relatively big flop, considering that it barely manages to rival Krait 300/400 while consuming a lot of power and being incapable of hitting 2+ GHz).

    Krait 300 was nice, given that the L2 cache prefetcher sped things up quite a bit, but other than breaking the 2 GHz barrier, Krait 400 isn't very innovative. It looks like Krait 450 isn't going to be a game changer either (the Nexus 5 faux kernel already manages a 2.5 GHz overclock). As annoying as this is going to sound, Snapdragon needs a little more oomph on the CPU side of things to match Cyclone, which despite its measly dual-core config is pretty impressive.

    Let's hope Qualcomm doesn't pull an AMD and join the GHz race.

    • Sean Lumly

      Cyclone is a great performer, but it's easy to ignore that a single Cyclone core is easily the size of 2x Krait 400 cores on die. It is a HUGE core compared to today's cores, and I feel the media has misrepresented its single-thread performance as 'magic', when it certainly has more-than-a-little to do with its mammoth die area. The details of the actual implementation are hard to come by, though, so little can be concluded. For example: many benchmarks may be skewed upward by its built-in crypto instructions, though these may not be commonly used in day-to-day workloads. It also (more than likely) has more pipelines!

      In the end it's about trade-offs. Quad-core setups *should* perform better than the dual-core Cyclone for high-utilization multi-threaded apps (benchmarks seem to confirm this). I wouldn't count Krait out just yet, though. There's much more to it than a single-thread to single-thread perf comparison.

      Now, Imagination Technologies (the folks behind PowerVR) is getting ready to storm into the Android fray with yet another core called Meta, which is derived from their acquisition of MIPS a while back. The interesting thing about this core is how incredibly efficient it is, and how well it mitigates stalls and thus hides memory latency. It's also comparatively *tiny* on die, making it a very enticing core. I have read that a single Meta core can perform around as well as a quad-core Cortex-A9 -- that's incredible performance. This should make it a great performer and attractive for markets with smaller (read: cheaper) chips, like the emerging markets of developing nations.

      Meta uses a different instruction set architecture (ISA) than ARM, making it incompatible. But apparently 93% of Play apps that are written to be portable (ie. Dalvik/RenderScript/FilterScript) already run on it. It's really interesting stuff!

      • Allan

        Well, that probably explains why Apple doesn't have enough die space for a quad Cyclone part.

        All this talk about META (although there is scarce info on the web about it) is exciting, but in reality the odds of a major OEM adopting META in the next 2 years are about as slim as Samsung switching to Tizen next year. Of course, Krait 400 isn't really meant to compete against Cyclone; that's what the A57 is for.

        Correct me if I'm wrong, but isn't Bay Trail-T at sub-2 GHz speeds already capable of laying waste to Cyclone (and BTT is 32-bit but 64-bit ready)? If that's correct, then there's nothing to be excited about here. Apple always releases their "new and revolutionary" SoC, and it's matched within 3 months and completely quashed within 5.

        • Sean Lumly

          A quad-core Cyclone part -- assuming a standard 100mm^2 SoC -- would basically mean that you have no GPU! :P

          Yes and yes. I'm betting that Meta will not be targeting the premium market to start. Rather, they will be going after smaller SoCs for developing countries and the low-end market. But I feel that in time they will scale their CPU up to more high-end competitive sizes. Still, if I were ARM, I would be more than a little worried; Imagination is a fierce competitor.

          I'm only slightly aware of Intel's products. It seems that Bay Trail is also a big core, and yes, it should compete favourably with Cyclone at similar clocks. Of course, it typically has a process advantage (22nm), so perf/mm^2 is a question mark (for me, anyway), and I would guess that perf/W is below that of Cyclone. Intel has failed to break into the smartphone/tablet market in a meaningful way, which makes perf comparisons difficult.

          Anyway, Apple may have a fat 64-bit core, but the industry is heading in that direction anyway. It's been speculated that they are using a modified A57 core, which implies that in a few months most chips will be similarly spec'd, and Apple's A7 will certainly be out-performed. It likely already is in perf/mm^2, if you consider high-utilization multi-threaded workloads, as benchmarks allude to.

          I'm not really moved by the hype surrounding Cyclone. I'm also not moved by the differences between Krait and the A15. The real differences in performance are felt in software startup times and GPU throughput. The first is largely affected by software, and the second is unrelated to the CPU. CPUs, while necessary, are generally not where real-world perf is determined these days. And with GPU compute becoming increasingly popular, I don't expect this trend to reverse.

          • Allan

            I'm pretty sure Intel made Bay Trail-T already 64-bit, but from what I read a 64-bit kernel was not yet available so they made do with 32-bit software.

            Somewhere it says that Intel is prepping BTT with a 64-bit kernel soon. But I'm going to wait at least until their next SoC to pass any judgement, because their HD Graphics is an extreme disappointment. Intel let down the Silvermont core with their "new and improved HD Graphics".

            Any idea on what Qualcomm will do for their first 64-bit core? I heard that the S805 is coming with 64-bit LPAE support.

          • Sean Lumly

            Die size is everyone's problem, and Apple is certainly no exception. Not only are larger dies more expensive, implicitly cutting into profitability (shareholders do not like that, especially with the huge volumes Apple sells: eg. even $1 per device can mean hundreds of millions in profit for a single SKU), but utilizing that increased die space invariably means more power draw. The second point is the more important one. Because all devices are constrained by volume and thus battery size/life, all companies must make design tradeoffs to maximize both performance and battery life.

            The increased performance has come primarily from two vectors: increasingly sophisticated circuit logic (eg. greater efficiency) and denser circuits (ie. a smaller process). Raising the thermal limit isn't an option, as maintaining battery life is a top concern for mobile devices. As the process shrinks, the circuits consume less power and run cooler (AFAIK), so you can ramp up the clocks and do more on die compared to a larger node.

            This is why mobile has been moving so quickly in performance year-over-year: each year the process has shrunk, giving us a "Hyper Moore's Law". This will likely end the year after next, as mobile meets the industry's 14nm and 10nm nodes (if Samsung/TSMC can reach those processes quickly).

            But there's also the vector of hardware efficiency, and mobile has shown incredible innovation on this front with better methods of computing data, which may mean faster performance year-over-year into the foreseeable future.

            I'm not sure what Qualcomm will do with their 64-bit core. I think they'll keep the asynchronous-core idea, as well as the pipeline breadth/depth changes that come with a new core. But beyond that I have no clue.. :P

            LPAE is a 40-bit physical address space for virtualization, so that a 32-bit core can access much more memory. This was a very good 'quick fix' for servers that were incorporating ARM chips, but it's not really relevant for mobile devices, which typically address < 4GB. 64-bit cores imply a very large address space and render LPAE unnecessary. So I'm pretty sure Qualcomm will not include it -- nor ARM, for that matter!
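            The address-space gap being described is easy to check with back-of-the-envelope arithmetic: 32 address bits reach 4 GiB of physical memory, while LPAE's 40 bits reach 1 TiB.

```python
# Addressable physical memory under 32-bit vs 40-bit (LPAE) addressing.
GiB = 2**30

addr_32bit = 2**32 / GiB  # plain 32-bit physical addressing
addr_lpae = 2**40 / GiB   # LPAE extends physical addresses to 40 bits

print(addr_32bit, addr_lpae)  # 4.0 1024.0  (GiB; LPAE reaches 1 TiB)
```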

            But I think stacking memory on top of the AP will start to have a dramatic effect on the way the chip logic is organized. Memory bandwidth is one of the largest performance sinks, and if I were being really creative, I could imagine novel AP-to-DRAM pathways, or multiple banks to increase bandwidth and lower latency significantly (thus improving performance and lowering power consumption), or perhaps even larger-process (eg. 28-32nm) caches sitting between the SoC and DRAM.

            There's also a big push for hardware ray-tracing on mobile, and I wouldn't be surprised to see Qualcomm chasing this trend. Imagination certainly is (read: Caustic), and I recall ARM talking about this as well.

  • Roland Golden Bay

    I think the Snapdragon 805 is the most powerful mobile CPU at the moment