Author Archives: Mellanie, Wn.s,.

New composite image shows auroras on Uranus

Published by Mellanie, Wn.s,.

The gas giants Jupiter and Saturn have garnered more attention over the years because they’re closer, and let’s face it, they look cool. Uranus is easy to overlook out there in the outer solar system, but there are some newly released images from NASA and the ESA that might make you notice it again. The images are a composite of data from Hubble and the Voyager 2 probe showing aurora activity in the planet’s atmosphere.

Uranus is the third largest planet in the solar system, after Jupiter and Saturn. It’s quite a bit smaller than Saturn, actually, with a radius of 15,759 miles (25,362 km); Saturn’s radius is more than twice that, but you could still fit 63 Earths inside Uranus. The planet appears as a uniform blue-gray globe from a distance, but subtle patterns show up in the clouds when viewed in certain wavelengths of light. It also has a ring system — it’s no match for the majestic rings of Saturn, but it’s got Jupiter beat in that department. In addition, Uranus has the distinction of rotating with an axial tilt of 97 degrees, which leaves its axis almost parallel to the plane of the solar system. Astronomers hypothesize it was struck by a smaller planet in the distant past that tipped it over on its side.
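The “63 Earths” figure is just the cube of the radius ratio. A quick back-of-envelope check (taking Uranus’s mean radius as 25,362 km and Earth’s as 6,371 km):

```python
# Volume scales with the cube of the radius, so the number of Earth
# volumes that fit inside Uranus is (r_uranus / r_earth) ** 3.
r_uranus_km = 25_362  # Uranus's mean radius
r_earth_km = 6_371    # Earth's mean radius (assumed)

volume_ratio = (r_uranus_km / r_earth_km) ** 3
print(f"Earth volumes inside Uranus: {volume_ratio:.0f}")  # ~63
```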

The above images show bright auroras glowing in the clouds of Uranus, a phenomenon that was only confirmed in 2011. Astronomers had previously seen auroras on other gas giants like Jupiter and Saturn, but never on Uranus. Auroras are caused by streams of charged particles, such as electrons, carried by the solar wind or originating in a planet’s own ionosphere. They are channeled into the upper atmosphere by the planet’s magnetic field, where they collide with gas molecules — oxygen and nitrogen on Earth, mostly hydrogen on a gas giant like Uranus. The excited gas then gives off light, which we can observe.

Size comparison of Uranus and Earth.

The blue disk of Uranus in the above images comes from the snapshots taken by Voyager 2. It executed a flyby of the planet in 1986 on its way to the edge of the solar system. This is the only mission to get so close to Uranus, so these images are still among the best we have. The auroras and the ring system parts of the image come from Hubble, based on data acquired in 2014. Those observations used the ultraviolet capabilities of the Space Telescope Imaging Spectrograph (STIS) instrument on the space telescope.

The team that captured the auroras tracked the interplanetary shocks resulting from powerful bursts of solar wind. When the time came for those gusts to reach Uranus, Hubble was ready and watching. This analysis led to evidence that these huge auroras actually rotate with the planet. The team also re-discovered Uranus’ magnetic poles, which were “lost” following Voyager 2’s visit because of uncertainty in the measurements.

Source : https://www.extremetech.com/extreme/247511-new-composite-image-shows-auroras-uranus

Astronomers spot a massive explosion from the collision of two stars


The Orion Molecular Cloud Complex is a group of nebulae and stellar nurseries positioned some 1,500 light years away from Earth. This is a place where new stars are born, but also apparently where they meet an early end. New observations from the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile reveal a giant fireworks display in Orion Molecular Cloud 1 (OMC-1) that was caused by two young stars colliding.

The ALMA data was processed and released by scientists working with the National Radio Astronomy Observatory (NRAO). The NRAO’s timeline for the explosion is a bit vague: it says the collision took place 500 years ago, but light from OMC-1 takes 1,500 years to reach us. Presumably, the above image represents the explosion as it appeared 500 years after it occurred. Tack on the 1,500 years of light travel time, and the actual event happened around 2,000 years ago.

Most stars form far enough apart that collisions are unlikely. However, OMC-1 is densely packed, and newly formed stars essentially drift at random. Over time, young stars slow down and fall toward the local center of gravity. In this case, two protostars ended up too close together and collided. The explosion launched material from the stars (and other nearby objects) outward at more than 150 kilometers per second (93 miles per second). This one event released more energy than our sun does in 10 million years.
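For a sense of scale, the “10 million years of solar output” comparison works out to roughly 10^41 joules, taking the Sun’s luminosity as a constant ~3.8 × 10^26 W (an assumed figure, not stated in the article):

```python
SOLAR_LUMINOSITY_W = 3.828e26  # assumed present-day solar luminosity
SECONDS_PER_YEAR = 3.156e7

# Total energy the Sun radiates over 10 million years
energy_j = SOLAR_LUMINOSITY_W * SECONDS_PER_YEAR * 10_000_000
print(f"{energy_j:.1e} J")  # ~1.2e41 J
```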

The visible remains of a protostar collision are rather short-lived by astronomical standards, and from such a great distance much of the debris isn’t detectable in the visible spectrum. That’s why ALMA’s radio-frequency observations were so revealing, but we were still lucky that ALMA managed to spot this one. You can see the extent of the fireworks display above, overlaid on a Hubble image of the relevant section of the Orion Molecular Cloud. The bright streamers represent emissions from carbon monoxide gas as it’s propelled outward.

The short lifetime of the visible evidence makes it hard to estimate how common these collisions are, but astronomers suspect they’re a frequent occurrence in stellar nurseries like OMC-1. The first hints of this explosive event were uncovered in 2009 by the Submillimeter Array in Hawaii, but that instrument lacked the power to reveal the event’s true scale. The same was true of infrared observations made with the Gemini South telescope. Future study of the collision debris as it expands could help astronomers learn more about the conditions inside these stellar nurseries.


Source : https://www.extremetech.com/extreme/247456-astronomers-spot-massive-explosion-collision-two-stars

Nvidia claims Pascal GPUs would challenge Google’s TensorFlow TPU in updated benchmarks


Last week, we covered Google’s internal benchmarks of its own TPU, or tensor processing unit. Google’s results revealed that the TPU is much faster than a conventional GPU for processing inference workloads, at a fraction of the power consumption. While training machine learning models still takes place on GPUs, Google’s TPU results represent a significant leap forward for inference processing — and Nvidia, as one might expect, has its own take on those numbers.

According to a new blog post from Nvidia, the comparison would’ve looked quite different if Google had tested Nvidia’s current Pascal-class GPUs instead of relying on the older, Kepler-based, dual-GPU K80. Here’s Nvidia:

[Google’s] team released technical information about the benefits of TPUs this past week. It asserts, among other things, that the TPU has 13x the inferencing performance of the K80. However, it doesn’t compare the TPU to the current-generation Pascal-based P40.

Nvidia’s statement that the TPU offers 13x the performance of the K80 is technically accurate, but there’s a snag. That 13x figure is the geometric mean across all the various workloads combined, as shown below:


MLP0 – CNN1 refer to workloads. GM = geometric mean, WM = weighted mean. Since Google knows how much each individual workload contributes to the total workload suite, the weighted mean figures should be treated as more accurate.

As Google notes, the geometric mean is a good analysis method when you don’t know how much each application contributes to the program mix. In this case, however, we do know that — and therefore the more appropriate column to use is “WM,” the weighted mean. Adjusted for application contributions, the gap between the TPU and the K80 increases to 15.3x. And, of course, that gap varies substantially from workload to workload, from no gap at all on LSTM1 to a 60x gap on MLP1.
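The difference between the two averages is easy to reproduce. A toy example (the per-workload speedups and weights below are illustrative inventions, not Google’s actual figures):

```python
import math

# Hypothetical per-workload TPU-vs-K80 speedups (illustrative only)
speedups = [1.0, 4.0, 60.0]    # e.g. an LSTM-like, CNN-like, and MLP-like workload
weights  = [0.29, 0.10, 0.61]  # share of total datacenter workload (made up)

# Geometric mean: treats every workload equally
geo_mean = math.prod(speedups) ** (1 / len(speedups))

# Weighted (arithmetic) mean: weights each workload by how often it runs
weighted_mean = sum(s * w for s, w in zip(speedups, weights))

print(f"geometric mean: {geo_mean:.2f}x")
print(f"weighted mean:  {weighted_mean:.2f}x")
```

The geometric mean tamps down the outlier workload; weighting by how often each workload actually runs can move the headline number substantially, which is exactly the 13x-versus-15.3x discrepancy at issue here.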

For reference, here’s the slideshow we used in last week’s story, with competitive performance figures for Haswell, K80, and Google’s TPU:

Nvidia’s argument is that Pascal has vastly higher memory bandwidth and far more resources to throw at inference performance than K80. The net result of these improvements, according to Nvidia, is that the P40 offers 26x more inference performance than one die of a K80.


It’s not clear which inference tests Nvidia is referring to with its claim of a 26x improvement, and the varying results in the slideshow above demonstrate that the relative performance gap between Nvidia and Google is highly workload-dependent. It’s also not clear whether Nvidia’s claim takes Google’s tight latency caps into account. At the small batch sizes Google requires to meet its 8ms latency threshold, K80 utilization is just 37 percent of maximum theoretical performance. The vagueness of the claims makes them difficult to evaluate for accuracy.

Google’s enormous lead in incremental performance per watt will be difficult to overcome.

Google’s whitepaper (PDF) also anticipates this kind of claim. The researchers disclosed that they have modeled the expected performance of a TPU with GDDR5 instead of DDR3, giving it substantially more memory bandwidth. Scaling memory bandwidth up by 4x would improve overall performance by 3x, at the cost of ~10% more die space. There are ways, in other words, to boost the TPU side of the equation as well.


There’s no arguing the P40 is much faster than the K80; Nvidia’s documentation shows a 3-4x performance boost in inference workloads between Pascal and Maxwell, to say nothing of Kepler. Even so, Google’s data shows a huge advantage in performance-per-watt for the TPU compared with GPUs, particularly once host server power is subtracted from the equation. This is the classic problem with pitting a GPU against a custom-built ASIC — at the end of the day, a GPU contains a great deal of power-chewing hardware that a chip like Google’s TPU simply doesn’t need.

A matter of resources

It would have been interesting to see how Google’s TPU stacks up against Nvidia’s newest and most powerful Pascal hardware, but I strongly suspect it wouldn’t tell us much about which kinds of solutions vendors are likely to use. For companies like Google, Microsoft, and Facebook, custom-built ASICs offer the prospect of vastly improved efficiency. The cost of researching and building an ASIC can be tolerated because each company has the pockets to fund it and a guaranteed market for the final product. Google’s TPU is custom-designed for very specific workloads and excels at them; a GPU like Nvidia’s P40 is designed to perform well across a wider range of workloads with varying characteristics.

Most companies, including plenty of Fortune 500 companies that might like to deploy deep learning or AI software, lack the expertise to handle in-house development. Companies that have this ability may well build custom circuits to handle future development, but the majority of firms will probably stick to using GPUs, at least for the foreseeable future.

Source : https://www.extremetech.com/computing/247403-nvidia-claims-pascal-gpus-challenge-googles-tensorflow-tpu-updated-benchmarks

Drill baby drill: Japanese scientists hope to reach the Earth’s mantle in massive borehole project


Japanese scientists have announced a plan to drill through the Earth’s crust and reach the mantle, a major initiative that would be a first for humankind. Despite multiple previous attempts and several boreholes of significant depth, we’ve never managed to drill far enough to see what lies beneath the Earth’s rocky crust. Instead, our knowledge of the mantle is mostly based on indirect observations, like the speed at which seismic waves propagate through the planet’s interior. A 2007 investigation into a unique area between Cape Verde and the Caribbean Sea, where the Earth’s crust is missing and the mantle is directly exposed, yielded some fascinating rock samples and scientific data, but not the same information scientists hope to gather by drilling into the mantle directly.


Image by The Yomiuri Shimbun

The new project, led by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) and its drilling vessel Chikyu, will begin with a two-week survey off the coast of Hawaii. If that location isn’t suitable, the research team plans to investigate areas offshore from Costa Rica and Mexico. All of the candidate sites are in the ocean because oceanic crust is far thinner than continental crust. Even so, this is no small task: Chikyu’s drill will have to pass through 2.5 miles of water and 3.7 miles of crust to reach the mantle, which accounts for roughly 85% of the Earth’s volume.

We already know the mantle is composed of different material than the Earth’s crust. Mantle rock has a higher ratio of magnesium to iron than crustal rock, and contains less silicon and aluminum. We also know the mantle slowly circulates thanks to convection currents. As the diagram below shows, hot spots deep in the region where the mantle meets the Earth’s core can drive an upwelling of heat several thousand kilometers away.


Image courtesy of Wikipedia

But beyond these facts, there’s a great deal we don’t know. Open questions include how hot spot formation relates to earthquakes and volcanic eruptions on the Earth’s surface, how convection currents interact with tectonic plates, and how the mantle can apparently suffer its own earthquakes at depths of 250 to 420 miles. The Japanese government is helping to fund the Chikyu expedition in the hope that the information gained can be used for earthquake prediction. The research team also wants to investigate interactions between the crust and mantle to understand how the planet’s crust formed in the first place, and to determine exactly how deep within the Earth microbial life can exist.

The Japanese team hopes to begin drilling by the early 2020s, with 2030 set as a maximum deadline. The site surveys aren’t the only important issue to be addressed — Chikyu will have to test the six-kilometer drill pipe it plans to use, and the Japanese may be hoping to find other nations interested in helping to bankroll the project.

Source : https://www.extremetech.com/extreme/247370-drill-baby-drill-japanese-scientists-hope-reach-earths-mantle-massive-drilling-project

Scientists find Earth-like planet with an atmosphere just 39 light-years away


If you’re looking for an Earth-like planet with an atmosphere, you need only go outside. Alternatively, you could travel some 39 light-years toward the constellation Vela. That’s where astronomers have discovered an exoplanet called GJ 1132b, which appears to have an atmosphere. While it might not be particularly hospitable to human life, some other organism might find it quite cozy.

GJ 1132b was first detected in 2015, but its atmosphere wasn’t known at the time. Astronomers reported the planet was Earth-like, at least in a broad sense: it’s about the same size and has a similar mass. However, it orbits very close to a red dwarf star—a year for GJ 1132b is only 1.6 Earth days. That means it’s probably exposed to a great deal of radiation, and the surface temperature is believed to be around 700 degrees Fahrenheit (371 degrees Celsius). The planet is also tidally locked to the star, meaning the same side always faces it. So, one side is roasted and the other frigid.
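The 1.6-day year also pins down how tight the orbit is. A rough Kepler’s-third-law sketch (the stellar mass of ~0.18 solar masses is an assumption drawn from published GJ 1132 estimates, not from the article):

```python
import math

G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30          # solar mass, kg
star_mass = 0.18 * M_SUN  # assumed mass of the red dwarf GJ 1132
period_s = 1.6 * 86_400   # orbital period: 1.6 Earth days, in seconds

# Kepler's third law: a^3 = G * M * T^2 / (4 * pi^2)
a_m = (G * star_mass * period_s**2 / (4 * math.pi**2)) ** (1 / 3)
a_au = a_m / 1.496e11  # convert meters to astronomical units
print(f"orbital distance: {a_au:.3f} AU")  # ~0.015 AU
```

At roughly 0.015 AU, the planet sits about 25 times closer to its star than Mercury does to the Sun, which is why so much radiation reaches it despite the dimness of a red dwarf.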

GJ 1132b was detected with the transit method, which watches for dips in a star’s light as a planet passes between it and Earth. The new analysis was led by John Southworth, an astrophysicist at Keele University in the UK. After the initial discovery, Southworth began working on a modification of the transit observation that could detect an envelope of gas around the exoplanet. The team found a measurable dip in “z-band” light during GJ 1132b’s transits; the absorption of this near-infrared wavelength indicates that GJ 1132b most likely has an atmosphere.
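The transit method’s dip is simple geometry: the fractional loss of starlight equals the area ratio of the planet’s disk to the star’s. A sketch with assumed sizes for the pair (~1.2 Earth radii for the planet and ~0.21 solar radii for the red dwarf; neither figure appears in the article):

```python
# Transit depth = (planet radius / star radius) ** 2
R_EARTH_KM = 6_371
R_SUN_KM = 696_000

planet_radius_km = 1.2 * R_EARTH_KM  # GJ 1132b, assumed ~1.2 Earth radii
star_radius_km = 0.21 * R_SUN_KM     # GJ 1132, assumed ~0.21 solar radii

depth = (planet_radius_km / star_radius_km) ** 2
print(f"transit depth: {depth:.2%}")  # roughly a 0.3% dip in starlight
```

A dip that small is why the small size of red dwarfs helps: the same planet crossing a Sun-sized star would produce a far shallower, harder-to-measure signal.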

Comparison of Earth and GJ-1132b.

The existence of an atmosphere around GJ 1132b isn’t all good news for habitability. There’s likely a greenhouse effect of some sort, which would push the temperature even higher; it’s safe to say that even the advent of interstellar travel wouldn’t make GJ 1132b a suitable vacation spot. However, the finding could have huge implications for habitability elsewhere in the universe. Red dwarfs are the most common type of star in the observable universe, and if this exoplanet orbiting a red dwarf has an atmosphere, many others may as well. Maybe some of them are more pleasant, and more legitimately Earth-like, than GJ 1132b.

Simply being able to confirm an atmosphere around an exoplanet is a big step forward in the search for extraterrestrial life. Life on a planet will likely leave an imprint on its atmosphere, which is something we may eventually be able to detect, though our methods would need to become much more sensitive first. It’s possible the James Webb Space Telescope will be able to gather data on the composition of exoplanet atmospheres when it launches in a few years.

Source : https://www.extremetech.com/extreme/247340-scientists-find-earth-like-planet-atmosphere-just-39-light-years-away

Microsoft finally reveals exactly what telemetry Windows 10 collects about your PC


Ever since Microsoft launched Windows 10, privacy advocates and concerned users have loudly argued that the firm needed to improve and clarify its privacy policies. To date, the company has mostly ignored those requests, though Windows 10’s recent Creators Update did move the ball forward in some meaningful ways. But as of today, the company is doing more to explain how, where, and when it collects data, as well as explaining how that information is used.

First, everyone who updates to the Creators Update will be presented with a privacy screen showing them what their current device settings are in certain key areas, as shown below:


Everyone who installs the Creators Update will be offered the option to change these settings.

Previous updates to Windows 10 didn’t prompt users to review their privacy settings, so we’re glad to see Microsoft deliberately informing current users that they can change settings they may not have been aware of prior to the update.

Users who are installing Windows 10 for the first time will see the updated Privacy settings we’ve highlighted before, but Microsoft makes a nice point in its blog post: The privacy settings are now designed to fit into two columns on a monitor, as opposed to requiring end-users to scroll down to find the rest of them. That kind of tweak enhances discoverability, and while it’s a small thing, we’re glad to see it.

Microsoft has also updated its own documentation to make it extremely clear which data is gathered and under what circumstances. Here’s a snip from the article on Basic collection, by Brian Lich:

The Basic level gathers a limited set of information that is critical for understanding the device and its configuration including: basic device information, quality-related information, app compatibility, and Windows Store. When the level is set to Basic, it also includes the Security level information. The Basic level helps to identify problems that can occur on a particular device hardware or software configuration. For example, it can help determine if crashes are more frequent on devices with a specific amount of memory or that are running a particular driver version. This helps Microsoft fix operating system or app problems.

There’s an enormous section (around 30,000 words) following this, which defines all of the extensions, events, and information that can be collected in Basic mode, in case you want to investigate the situation for yourself. I won’t claim to have read the whole novella, but I have spot-checked it, and there’s no sign that I can see of Microsoft collecting anything nefarious in “Basic” mode. The list is so long because Microsoft’s diagnostic data gathering is rather thorough; the Census.* section is a decent example of this.


The information Microsoft gathers in its Basic telemetry option appears to be confined to general data about the system state, as opposed to anything personal. In this specific case, I think the company’s claim that it gathers this information for bug-fixing and troubleshooting purposes passes muster. Microsoft has long used such programs — that’s how we found out that buggy Nvidia drivers were responsible for 22 percent of Windows Vista crashes.

The company has not yet published a similarly exhaustive, Robert Jordan-length list for its “Full” collection mode, but it has documented exactly which kinds of information are collected in various scenarios and the circumstances in which that information is sent to Microsoft. Much of the data gathered in “Full” mode is still general system-state information, but there are places where personally identifiable information (PII) may be sent to Redmond, as with the “Browsing, Search, and Query” data.


Disclosing this kind of information is a critical step towards answering the privacy questions concerned Windows users have been raising for at least two years (questions about privacy in Windows 10 were being raised well before the OS actually launched). Microsoft’s privacy policy has also been updated. It’s fairly well written, easy to understand, and features a refreshing lack of weasel words.

A definite step in the right direction

The Creators Update was already expected to improve Windows 10’s privacy controls, but publishing these secondary documents is, we think, an important demonstration of good faith. We still maintain that the best way for Microsoft to address these privacy issues is to give users the option to opt out of telemetry gathering altogether, and some users will remain uninterested in Windows 10 until and unless Microsoft adopts that stance.

For users who have been somewhere in the middle — unhappy with the state of things, but not swearing off the OS just yet — these updates and additional changes should answer a lot of previous concerns. I honestly don’t know why it took Microsoft so long to realize it should address user concerns by actually releasing information, as opposed to offering a vague, hand-waved “trust us,” but we’re glad to see the company finally disclosing this data. A few weeks ago, we criticized Microsoft for claiming to value user feedback while ignoring a huge chunk of its audience unhappy with privacy in Windows 10. We’re not declaring victory just yet, but it looks as though Microsoft has been listening at least a little more than we thought it was.

Source : https://www.extremetech.com/computing/247311-microsoft-finally-reveals-exactly-telemetry-windows-10-collects-pc

25 best Android tips to make your phone more useful


Android has overtaken Windows as the most popular computing platform in the world. That’s thanks in part to how fast new features have been added. Google is always making tweaks and coming up with new features for Android, and OEMs like Samsung and LG can add their own stuff on top of that. It can be hard to keep up, so we’ve gathered the 25 best tips for your Android phone right here.

 And much more…

After all of the above, you should be a pro at using Android. This is just the beginning, though. There’s a lot more to discover in Android, and every device is a little different. So, don’t be afraid to poke around in the deep, dark corners of the settings and see what you can find.



Source : https://www.extremetech.com/mobile/223282-25-best-android-tips-to-make-your-phone-more-useful

Astronomers switch on globe-spanning Event Horizon Telescope


You’ve probably seen many images over the years that represent a black hole, but none of them are actual images of one (including the one above); they’re all artists’ renderings, or at best real images of the superheated gas around a black hole. Now astronomers around the world have banded together and flipped the switch on a project called the Event Horizon Telescope. By linking up data from radio telescopes all over the world, the international team hopes to generate the first-ever image of a black hole.

There are a number of problems that have prevented scientists from seeing a black hole. For one, there aren’t any close by, which is actually a good thing if you don’t like being torn apart by tidal forces and sucked into oblivion. Black holes are also physically smaller than you’d expect, given their mass; it’s that extreme compactness that produces such incredible gravitational pull at close range. There’s also the matter of light itself being pulled into a black hole instead of emitted where we can see it.

Astronomers will use the Event Horizon Telescope to look at two different supermassive black holes. One is the black hole in the center of our own galaxy, which is known as Sagittarius A* (pronounced “Sagittarius A Star”). The other is at the center of a nearby galaxy called M87, also known as Virgo A. It’s one of the largest galaxies in the local universe, and is famous for having a gigantic jet of matter blasting out from the black hole at its center.

The jet of matter ejected from M87 by its black hole.

The Event Horizon Telescope consists of eight radio telescopes around the world, all of which will observe the same objects in concert. Using radio frequencies allows astronomers to peer through the shroud of dust and gas that usually obscures black holes. The target is the halo of superheated gas believed to circulate just above the event horizon as it’s pulled in. A single telescope couldn’t gather enough clean data to produce an image of that halo, but a network of telescopes spanning the globe might.
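What makes a globe-spanning network powerful is its effective aperture: an interferometer’s angular resolution is roughly the observing wavelength divided by the longest baseline. A sketch using the EHT’s ~1.3 mm observing wavelength and an Earth-diameter baseline (both figures are assumptions, not stated above):

```python
import math

wavelength_m = 1.3e-3  # EHT observes at ~1.3 mm (about 230 GHz)
baseline_m = 1.27e7    # longest baseline ~ Earth's diameter

# Diffraction limit: resolution (radians) ~ wavelength / baseline
resolution_rad = wavelength_m / baseline_m
resolution_uas = math.degrees(resolution_rad) * 3600 * 1e6  # microarcseconds
print(f"~{resolution_uas:.0f} microarcseconds")  # ~21
```

A resolution of ~21 microarcseconds is comparable to the predicted apparent size of Sagittarius A*’s shadow, which is why nothing short of an Earth-sized array will do.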

Observations for this project began on April 4th and will run through April 14th. The data acquired at each site will then be shipped to the Max Planck Institute for Radio Astronomy in Germany and MIT’s Haystack Observatory. Combining the data should cancel out noise and reinforce the signal from the event horizon halo, though that process is likely to take several months.

So, in a few months, we could finally see what the event horizon of a black hole looks like. This may help answer a number of long-standing questions about physics and the nature of galactic evolution.

Source : https://www.extremetech.com/extreme/247305-astronomers-switch-globe-spanning-event-horizon-telescope

Nvidia debuts new Titan Xp top-end GPU, now with Mac support


When Nvidia launched the GTX 1080 Ti at the end of February, it undercut its own highest-end product, the six-month-old Nvidia Titan X, thanks to higher clocks and a much lower price tag ($700, compared with $1,200). Now, Nvidia is rectifying that situation with a full-fat GP102 part — the Titan Xp.

The Titan Xp sports 3,840 cores, 240 texture units, and 96 ROPs, compared with the 1080 Ti’s 3584:224:88 configuration. We don’t know the card’s base clock yet, but Nvidia’s press release claims the 3,840 CUDA cores run at 1.6GHz. It’s not clear if that refers to the base clock; if so, the Titan Xp would be clocked significantly higher than any other GP102 on the market today. We do know the card packs an eye-popping 547GB/s of memory bandwidth and a 12GB frame buffer.
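Those two numbers are enough to sanity-check Nvidia’s advertised ~12 TFLOPS of single-precision compute (a spec not quoted above): each CUDA core can retire one fused multiply-add, i.e. two floating-point operations, per clock.

```python
cuda_cores = 3_840
clock_hz = 1.6e9              # the 1.6GHz figure from Nvidia's press release
flops_per_core_per_clock = 2  # one fused multiply-add = 2 FLOPs

tflops = cuda_cores * clock_hz * flops_per_core_per_clock / 1e12
print(f"{tflops:.1f} TFLOPS FP32")  # ~12.3
```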


The purpose of a card like this is simple: At $1,200, it isn’t meant to represent a particularly great deal; it’s meant to serve as a halo product for those particularly discerning and well-heeled customers who want something just a little faster, a little nicer, than everything else on the market. Even the 1080 Ti is going to be a better deal than the Titan Xp, as far as price/performance ratios are concerned.

New driver sets add Mac support

The other interesting tidbit from this announcement is that Nvidia is promising a new driver set, due later this month, that will add full Mac support for the entire Pascal product line. Anyone with a 10-series Nvidia GPU should be able to use it in their Mac, no problem… provided, of course, that you have a relatively old Mac Pro.

There’s a certain level of absurdity to the current situation. Unless you buy an external enclosure and rig up a custom Thunderbolt solution, you’re not plugging a 10-series GPU into a MacBook Pro, MacBook Air, or MacBook. iMac? No way. Mac mini? The GPU is larger than the system. And we’ve already covered the limitations of the current Mac Pro, and why Apple has to redesign its own “groundbreaking” chassis to allow it to work with modern hardware.

It isn’t clear if these drivers are limited to Mac Pros that support macOS Sierra (10.12) or not. Only the mid-2010 and mid-2012 Mac Pros were updated officially for Sierra, though apparently Xeon workstations from 2008 and 2009 can be unofficially upgraded to Sierra (at which point, presumably, the 10-series GPU would still work with them).

Nvidia could also be hinting that it expects to take AMD’s slice of the Apple market when Apple refreshes the Mac Pro. Apple has relied on AMD GPU silicon for a number of years, but that could change if Team Green offered them a sweet enough deal.

Source : https://www.extremetech.com/gaming/247296-nvidia-debuts-new-titan-xp-top-end-gpu-now-mac-support

Microsoft’s new Project Scorpio Xbox could blow the PS4 Pro out of the water, challenge high-end PCs


Microsoft finally announced some of the specifics of its upcoming Project Scorpio refresh, and the implications for the Xbox One product line are enormous. This isn’t just a refresh or a doubling-up of existing resources, like Sony used with the PS4 Pro. This is something altogether different, and Microsoft doesn’t seem to be just gunning for the PS4 — it’s taking on the PC market as well.

Project Scorpio tech specs

Project Scorpio will feature 40 “customized” Radeon compute units (2,560 cores, presumably) clocked at 1172MHz. That clock speed is 1.37x the Xbox One’s, while the GPU’s core count has increased by 3.33x.


The new Project Scorpio SoC, built on TSMC’s 16nm FinFET process. Image by Digital Foundry

Data from Digital Foundry strongly suggests Scorpio is based on Polaris, not Vega. L2 cache is said to have quadrupled, from 512KB to 2MB, which would match the RX 470 and RX 480 configurations. Microsoft is said to have doubled the number of render backends on the chip from 16 (Xbox One) to 32 (Scorpio), which again fits the Polaris architecture. Scorpio’s GPU is actually slightly larger than the RX 480’s, with 2,560 cores as opposed to 2,304: a 1.11x increase in GPU cores and a 7.5% decrease in GPU frequency compared with AMD’s PC variant. Microsoft claims a 2.7x increase in GPU fill rate — more than enough for 4K gaming (according to MS).
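The presumed core count, and Microsoft’s advertised 6-teraflop compute target (an external figure, not stated above), both fall out of GCN’s standard layout of 64 shaders per compute unit and two FP32 operations per shader per clock:

```python
SHADERS_PER_CU = 64  # standard GCN compute-unit layout
compute_units = 40
clock_hz = 1.172e9   # the 1172MHz figure from the article

cores = compute_units * SHADERS_PER_CU
tflops = cores * clock_hz * 2 / 1e12  # 2 FP32 ops per clock via FMA

print(cores)                   # 2560, matching the presumed core count
print(f"{tflops:.1f} TFLOPS")  # ~6.0
```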

RX480-Technology

The RX 480 looks like the closest analog to the GPU inside Project Scorpio. Most of the features line up, apart from obvious differences like 40 CUs instead of 36.

Memory bandwidth also gets a huge kick upward. The Xbox One was criticized for relying on cheaper DDR3 rather than going the unified GDDR5 route Sony chose. Scorpio will use GDDR5, and it offers 12GB of memory, up from 8GB. That 12GB of RAM is accessed via a series of 32-bit memory buses, for a total of 326GB/s of memory bandwidth. That’s nearly 5x the DRAM bandwidth of the Xbox One, not counting its 32MB of ESRAM cache, and on par with what the GTX 1080 offers.

A quote from Digital Foundry suggests Microsoft has done some significant work customizing its GPU front end. From the article:

the most exciting aspect surrounding the CPU revamp doesn’t actually relate to the processor blocks at all, but rather to the GPU command processor – the piece of hardware that receives instructions from the CPU, piping them through to the graphics core.

“We essentially moved Direct3D 12,” says Goossen. “We built that into the command processor of the GPU and what that means is that, for all the high frequency API invocations that the games do, they’re all natively implemented in the logic of the command processor – and what this means is that our communication from the game to the GPU is super-efficient.”

Processing draw calls – effectively telling the graphics hardware what to draw – is one of the most important tasks the CPU carries out. It can suck up a lot of processor resources, a pipeline that traditionally takes thousands – perhaps hundreds of thousands – of CPU instructions. With Scorpio’s hardware offload, any draw call can be executed with just 11 instructions, and just nine for a state change.

“It’s a massive win for us and for the developers who’ve adopted D3D12 on Xbox, they’ve told us they’ve been able to cut their CPU rendering overhead by half, which is pretty amazing because now the driver portion of that is such a tiny fraction,” adds Goossen.
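To put those instruction counts in perspective, here's a rough illustration of the per-frame saving the hardware offload implies. The draw-call count and the 1,000-instruction software cost are placeholder assumptions (the article says "thousands – perhaps hundreds of thousands"), chosen conservatively:

```python
# Rough illustration of the saving implied by the quoted instruction counts.
draws_per_frame = 5000    # assumed draw-call count for a busy scene
software_cost = 1000      # assumed instructions per draw call in software (conservative)
scorpio_cost = 11         # instructions per draw call with Scorpio's hardware offload

per_frame_saving = draws_per_frame * (software_cost - scorpio_cost)
print(f"CPU instructions saved per frame: {per_frame_saving:,}")  # 4,945,000
```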

These changes seem to further point to a vastly more powerful console than anything Microsoft has previously shipped, but what about the CPU?

As for Scorpio’s CPU, Microsoft isn’t revealing all its secrets just yet. The chip is now clocked at 2.3GHz (up from 1.73GHz), but it’s not a Ryzen processor. Microsoft says only that it has heavily customized the core to noticeably improve its overall performance. We’ve speculated before about how MS might accomplish that — Jaguar definitely had some low-hanging fruit, including its half-speed L2 cache and relatively slow memory controller. Just addressing those issues would speed the CPU up quite a bit. One thing Microsoft didn’t do, however, was move to a unified eight-core solution.

Eurogamer notes that some of the customizations to Scorpio include latency reductions and improved CPU/GPU coherency, all of which should improve performance, even apart from clock speed improvements.

Cooling design

Even on a 16nm process node from TSMC, this much firepower is going to generate a lot of heat — but Microsoft has borrowed a trick from the PC industry to keep its console cool. Scorpio will use a vapor chamber to move heat out of the APU and into the heat sink. Instead of a common axial fan, Scorpio uses a centrifugal fan that “kind of looks like a supercharger on a car.”


Project Scorpio’s vapor chamber. Image by Digital Foundry

Microsoft includes an internal power supply for Scorpio rated at 245W. AMD may have benefited from improved process node design at TSMC, since the RX 480 alone would’ve accounted for most of that power supply’s rated output.
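A rough power-budget check makes the point: AMD rates the reference RX 480's board power at 150W (a published spec, not a figure from this report), which by itself would leave little headroom in a 245W supply for the CPU, memory, storage, and I/O:

```python
# Rough sanity check on the 245W internal power supply.
psu_watts = 245
rx480_board_watts = 150   # AMD's rated board power for the reference RX 480
remaining = psu_watts - rx480_board_watts
print(f"left over for CPU, memory, storage, I/O: {remaining}W")  # 95W
```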

Conclusion-ish

We’ve got more to say about Scorpio and its ability to render in 4K, backwards compatibility, and some other aspects of the design, but the base tech specs give us more than enough meat to chew on for one article.

Barring calamitous problems, Microsoft has built a console that should easily outperform Sony’s PS4 Pro. The new Scorpio is faster, its GPU is larger, and it can dedicate a full 8GB of RAM to gaming, with 4GB reserved for the OS. Junking the 32MB ESRAM cache in favor of relying on GDDR5 should also make it easier for developers to target the platform.


Digital Foundry saw Scorpio running Forza 6 Apex at a speed and fluidity they’ve only seen matched by the GTX 1080 on a PC. I’m not suggesting anyone treat that single metric as proof the Xbox Scorpio is as powerful as Nvidia’s second-most powerful GPU, but it’s a very impressive showing for a console that hasn’t even launched yet. Four years after launch, Xbox One sales still trail Sony’s by nearly a 2:1 margin. There’s a good reason for that — the Xbox One is significantly less powerful than Sony’s console, and it nearly always loses to its rival in visual fidelity or performance when the two are compared.

Scorpio is set to change all that — and Eurogamer thinks it’ll debut at just $499.

Now read: The best free games on the Xbox One

Source : https://www.extremetech.com/gaming/247266-microsofts-new-project-scorpio-xbox-blow-ps4-water-challenge-high-end-pcs