Adobe teases AI-based tools to make great portrait photos out of lousy ones

Published by Mellanie, Wn.s,.

Smartphones are by far the world’s most popular cameras, but they certainly aren’t the best. Portrait photos, for example, often look awkward because of the wide-angle lens, flat because the small sensor can’t produce shallow depth of field, and boring because the photographer usually doesn’t have any special skills or access to advanced post-processing tools. Smartphone vendors, including Google with the Pixel camera and Apple with the iPhone 7 Plus dual camera, are beginning to provide the hardware and some of the software needed to address these issues, but Adobe plans to push the envelope quite a bit further.

But will it make you a better photographer?

In a new video teasing the future of Adobe’s Sensei “AI” technology and its integration into smartphones, Adobe demonstrates how computational imaging can be used to create a more pleasing perspective, add a depth-of-field effect, or even swap in a different background. The video is clearly a contrived demo, and watching it suggests these features would require either multiple imagers on the phone or the capture of several versions of the portrait from slightly different perspectives — but amazingly, Adobe says everything shown can be done with any portrait image in your camera roll.

Adobe takes things a step further in this video, showing simple “one-click” access to pleasing photographic styles. This is similar to the style transfer capability that Adobe Research teased earlier this week, but simplified and shrunk onto a smartphone platform. With current phones, advanced capabilities like style transfer may require access to the cloud — which won’t make for the cool real-time previewing shown in the video. Longer-term, though, with continuing advances in mobile GPU horsepower and smartphone storage capacity, these capabilities should eventually run entirely on your mobile device.

Adobe's Sensei isn't just about editing tools; it includes tools for object and facial recognition, among many others

Adobe’s Sensei is helping shape the future of image processing tools

Adobe is using Sensei as the marketing term for the combination of its massive content databases (everything available on Adobe Stock, for starters) and its machine learning technology. Some of it, like the deep neural nets used for the style transfer research and image recognition capability, fits pretty well with the current definition of AI. Other pieces, like audience data analysis, rely on more traditional machine learning techniques. The sum total, though, is a future with increasingly powerful tools for our photography. It is also a future with more and more investment going into computational imaging on mobile devices, which doesn’t bode well for the market for traditional standalone cameras.


ET deals: Dell Inspiron 3650 quad-core desktop PC for $389


For a limited time, Dell is offering up a massive discount on the Inspiron 3650 desktop PC. With today’s coupon in play, you’ll snag this powerful quad-core tower PC for 47% off the sticker price.

Dell Inspiron 3650 quad-core desktop for $389 (List price: $745.85 — Coupon code: INS389)

Internally, this model has a sixth-generation quad-core 3.3GHz Intel Core i5-6400 processor, integrated Intel HD Graphics, 8GB of DDR3L RAM (1600MHz), a 1TB hard drive, a DVD burner, Bluetooth 4.0, and 802.11b/g/n WiFi support.

A USB keyboard and USB mouse come along with your purchase, so you can simply hook up your existing HDTV or PC monitor over HDMI or VGA to hit the ground running. If your display only has DVI or DisplayPort, worry not. Amazon offers video adaptors on the cheap.

Windows 7 Professional (64-bit) ships on this machine by default, but you can upgrade to Windows 10 Pro (64-bit) for free. The Xbox app, Cortana personal assistant, and Edge browser come together to make Redmond’s latest OS a nice step up from previous releases. And as long as you don’t have any important legacy software lingering around, upgrading to Windows 10 is a breeze.

While previous models in this line have been a bit unwieldy, this latest redesign is a significant improvement. Measuring 13.78 by 6.06 by 11.13 inches (HWD), the new Inspiron 3650 represents a 45% reduction in size from the previous case design. As a benefit of this revision, two USB 3.0 ports, a headphone jack, and a five-in-one card reader are easily accessible on the front of the machine.

This configuration lists for over 745 bucks, but Dell is selling it online right now for just $579. Use coupon code “INS389” during checkout, and the sub-total drops down to just $389.
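As a quick sanity check on the advertised discount, using the list and coupon prices quoted above:

```python
def discount_pct(list_price: float, sale_price: float) -> float:
    """Percent saved off the list price."""
    return (list_price - sale_price) / list_price * 100

# Figures from the deal above: $745.85 list, $389 after coupon.
print(f"{discount_pct(745.85, 389.00):.1f}% off list")  # prints "47.8% off list"
```

That rounds down to the 47% Dell advertises.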

Our commerce group sources the best deals and products for the ET Deals posts. We operate independently of Editorial and Advertising and may earn a percentage of the sale, if you buy something via a link on the post. If you are interested in promoting your deals, please contact us at [email protected].

For more great Dell coupons head over to TechBargains.


Tesla about to become most valuable US car company


Tesla’s stock is surging, up 40% this year. A quarter of that growth has been this week, pushing the stock past $300 a share. Tesla’s market capitalization pushed it past Ford’s total value and it’s possible the market will close one day this week or next with Tesla worth more than General Motors.

This from a company that produced 83,922 vehicles in 2016. Ford sold 2.6 million vehicles in the US last year (a slightly different metric), about 7 million worldwide. GM sold 3 million cars in the US in 2016.

Musk tweaks Tesla short-sellers

Monday, Tesla stock soared 7 percent in the wake of news that Tesla had first-quarter shipments of 25,000 vehicles and that production plans for the Tesla Model 3 were moving ahead. Ford delivered 10 times as many vehicles, but the total fell below sales projections. That pushed Tesla’s market cap ahead of Ford’s at the market close, $48.7 billion to $45.6 billion (GM closed Monday at $51.2 billion market cap). As of Wednesday, Tesla’s market cap was just below $50 billion.

Tesla CEO Elon Musk took to Twitter to mock the short-sellers, posting: “Stormy weather in Shortville …”

Short sellers bet that a stock will go down; they have lost more than $2 billion betting against Tesla this year. Market cap (capitalization) is a company’s total value, calculated by multiplying the stock price by the number of shares outstanding.
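That calculation is simple enough to sketch. The share count below is a rough, illustrative figure for early 2017, not an exact number from Tesla’s filings:

```python
def market_cap(share_price: float, shares_outstanding: float) -> float:
    """Market capitalization: share price times shares outstanding."""
    return share_price * shares_outstanding

# ~164 million shares is an approximation; at $300/share it lands
# near the "just below $50 billion" figure cited above.
cap = market_cap(300.0, 164e6)
print(f"${cap / 1e9:.1f} billion")  # prints "$49.2 billion"
```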

Maybe Tesla is more than a car company

Analysts and investors who were bearish (negative) on Tesla had good reasons, on paper at least. Meanwhile, Musk is pushing hard on other ventures, including the SpaceX rocket program, SolarCity solar roof installations, Powerwall home batteries, and Tesla’s lithium-ion battery factories.

It’s becoming clearer that stock buyers are treating Tesla less as an automaker and more as a technology venture. Unit sales and revenue don’t matter as much as what the company is capable of in the future. Regardless, Tesla has a lot riding on a smooth launch of the Tesla Model 3 in the fourth quarter.

In terms of market cap, Tesla has surpassed all but a few manufacturers: Toyota, $176 billion market cap; Volkswagen, $73 billion; Daimler (Mercedes-Benz), $72 billion; BMW Group, $54 billion; and Honda, $52 billion.


This new application of layered neural nets is about to make Photoshop amazing


“This looks Photoshopped.” How many times have you heard — or said — those three skeptical words? And yet every year, Photoshop and tools like it get a little more sophisticated, and it gets tougher to tell what’s a ‘shop and what’s not. More research comes out, more programs are written to incorporate the new research, and then artists get their hands on the tools and develop a deft and subtle touch.

Now a group of researchers from Cornell University and Adobe have layered neural nets atop an image style transfer AI, to create an even more powerful image manipulation tool they’re calling Deep Photo Style Transfer. It takes a reference image, often heavily stylized, and an input image. Then it clones the style of the reference image onto the input image. What it spits out at the end is startling because of how seamless the changes can be.

Left: input image. Center: reference image. Right: output image.

Using “semantic segmentation,” the authors teased apart the concepts of edges, textures, content, and style to build their neural nets. You can think of it as a combination of the magic wand tool and the heal tool from Photoshop, or perhaps as a “format painter” like the one in Microsoft Word except for photos. The study authors used their tool to swap the textures of apples, for example, and to change the weather and time of day in photos.

Semantic segmentation is most valuable in the way it can be tuned for whatever input image it receives. In a mathematical modeling sense, a tree, a building, a face, or any other element in an image will have a different set of recurring angles and weights in its edges, which a model can use to distinguish one thing from another. We’re getting closer to being able to pick out cats in images without needing an entire supercomputer facility to do it.

For example, the authors explain in the paper (PDF), “consider an image with less sky visible in the input image; a transfer that ignores the difference in context between style and input may cause the style of the sky to ‘spill over’ the rest of the picture.” Deep style transfer is capable of accounting for these differences in context, so it respects the edges while confidently changing the textures.
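This line of work builds on the style-loss machinery from the original neural style transfer research, where “style” is captured by Gram matrices of CNN feature maps, and the semantic segmentation then constrains where each style may apply. A minimal NumPy sketch of that underlying idea, with random arrays standing in for real VGG activations:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (channels, height, width) feature map:
    channel-to-channel correlations that capture texture/style
    while discarding spatial layout."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)

def style_loss(input_feats: np.ndarray, ref_feats: np.ndarray) -> float:
    """Squared Frobenius distance between two Gram matrices."""
    diff = gram_matrix(input_feats) - gram_matrix(ref_feats)
    return float(np.sum(diff ** 2))

# Random stand-ins for activations from one CNN layer.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 16, 16))
b = rng.standard_normal((8, 16, 16))
print(style_loss(a, a))      # identical "styles" -> 0.0
print(style_loss(a, b) > 0)  # differing styles -> True
```

In the full method, an optimizer adjusts the output image to minimize this style term (per segmentation region) alongside a content term and the paper’s photorealism regularizer.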

This looks shopped. I can tell from some of the pixels and from seeing quite a few shops in my time. Image from Fig. 7, Bala, Schechtman, Paris and Luan, 2017

“People are very forgiving when they see [style transfer images] in these painterly styles,” coauthor Kavita Bala told The Verge. “But with real photos there’s a stronger expectation of what we want it to look like, and that’s why it becomes an interesting challenge.”

Prior art in image style transfer has already hit the streets in app form; there’s an app called Prisma that can apply painterly styles onto images using AI. It’s like Photoshop, but way better than trying to get all those filters right yourself. MIT also released an app that let users creepify their own input images into nightmare fuel. In this new work, the authors started with these same methods, and added another layer of AI to ensure the semantic details of the original image are preserved in the output image. The resulting neural net can tell what parts of an image are what. In short, you get your same input photo back, but seamlessly altered to have a different visual style. It’s like they drape the textures of the reference image atop the lines and edges of the input images.

The researchers are already thinking about other applications for photorealistic style transfer. “The question of how far you can push it is important,” said Bala. “Video is a logical thing for it to go to, and that, I expect, will happen.”

Now read: What are artificial neural networks?


Apple finally upgrades the Mac Pro, admits the trash can design sucks


Just over four years ago, Apple unveiled a new Mac Pro that it swore would reinvent the concept of a workstation. The new system was definitely daring — it ditched internal expansion for six Thunderbolt 2 ports and told users with internal hard drives to buy new external chassis and use those instead. It shipped with dual graphics cards as standard, even though Apple has never demonstrated much aptitude for or interest in pushing GPU-centric computing (the company’s operating systems have been stuck supporting ancient versions of OpenGL for years now).

Today, the company finally took a small step towards upgrading the current Mac Pro design, but it also acknowledged what we’ve all known for years — the trash can aesthetic of the 2013 Mac Pro makes it a serious pain to work with. As of today, Apple has tweaked the Mac Pro to include a six-core CPU (up from four) in the $2,999 model and an eight-core CPU (up from six) in the $3,999 model. The GPUs have also been slightly updated; the $2,999 system now ships with dual D500s, while the $3,999 rig ships with dual D700s. Given that these are GCN 1.0 GPUs with the D700 equivalent to AMD’s old HD 7970, we can’t really recommend them.

According to Daring Fireball, Apple is planning a major overhaul to its Mac Pro lineup next year, with a more modular design and a product that’s easier to update. As for why the Mac Pro hasn’t been updated for four years, here’s DF’s explanation:

Let’s say you’re Apple. You’re faced with the following problem. Three years ago you launched a radical new lineup of Mac Pros. For multiple reasons, you haven’t shipped an update to those machines since. At some point you came to the conclusion that the 2013 Mac Pro concept was fundamentally flawed… [T]hat tight integration made it hard to update regularly. The idea that expansion could be handled almost entirely by external Thunderbolt peripherals sounded good on paper, but hasn’t panned out in practice. And the GPU design was a bad prediction. Apple bet on a dual-GPU design (multiple smaller GPUs, with “pro”-level performance coming from parallel processing) but the industry has gone largely in the other direction (machines with one big GPU).

It’s rather frustrating to see corporations declare, years after the fact, that things end users immediately called out as problems are actually, you know, problems. Heck, Apple’s workstation competitors have been mocking its design with salient points about the limitations of the trash can since the platform shipped four years ago, as captured in the ad below by Boxx:

I respect Apple for trying to build something new and unusual, I truly do. But the Mac Pro went too far in the wrong direction in its quest to establish itself as unique and different. A whisper-quiet workstation with high-end peripherals is a noble goal, but not if it fundamentally handicaps both the end-user and the corporation that designed it from upgrading the underlying platform.

While Apple has upgraded the Xeons inside the Mac Pro, we recommend against making a purchase until more information is available on what these chips can do. While clock speeds on modern chips have scarcely budged since 2013, certain capabilities, like AVX2, still may not be available. It depends on whether Apple stuck with Ivy Bridge-era Xeons (as I’m guessing they did) or actually updated to a more recent iteration of Intel’s Core architecture.

Apple’s new, “completely rethought” Mac Pro will be available next year, as will a new “Pro” display. Maybe by then, the (presumably) next-gen Oculus Rift will support it?


The Samsung Galaxy S8’s display is the best you can buy: DisplayMate


Each year, Samsung releases a new flagship display, and each year, DisplayMate writes an evaluation of it using more charts, figures, and data points than you find in your typical graduate thesis. Once again, the Galaxy S8 has gone under the proverbial knife, and once again it’s emerged triumphant.

The S8 sets a number of records, including:

The largest native color gamut, at 113% of the DCI-P3 motion picture standard and 142% of the sRGB / Rec. 709 standard that most 1080p content uses (whether screens are correctly calibrated to sRGB is an entirely different question).

The highest peak brightness of any smartphone display, at 1,020 nits, though you can only hit this level with automatic brightness enabled (the smartphone will not allow you to manually specify this level if you disable that mode, but it should make the display easier to read in bright daylight).

Excellent screen reflectance, at 4.5%, though one display (unspecified) appears to have narrowly edged it, at 4.4%.

The color gamuts of the Galaxy S8. The red Adaptive gamut is the Samsung default. Image by DisplayMate

The Galaxy S8 offers four color gamuts: Basic (sRGB), Photo (Adobe RGB), Cinema (DCI-P3, often used for 4K), and Adaptive (default, wide gamut). Adaptive is the option Samsung has typically used to make its displays “pop” compared with iPhones. According to DisplayMate, the Galaxy S8 offers a new “deep red” OLED emitter that didn’t exist in previous smartphones, and the site credits it for how well the Adaptive display mode renders color.

Samsung and Apple have traded shots over the title of “best smartphone display” for years, though DisplayMate has tended to give the award to Samsung of late. The fact that the S8 receives DisplayMate’s first-ever A+ rating also underscores how displays, like most aspects of smartphone technology, have improved to the point where it’s getting hard to find new areas in which to measurably excel, beyond typical gains in power efficiency.

Speaking of power efficiency, here’s what DisplayMate has to say on that score:

Since 2013 the Display Power Efficiency of the Galaxy series of Smartphones has improved by a very impressive 56%. This year the new OLED materials on the Galaxy S8 have improved optical and power efficiency with its larger Native Color Gamut than on the Galaxy S7 (142% compared to 131% for sRGB / Rec.709).

While LCDs remain more power efficient for images with mostly full screen white content (like all text screens on a white background, for example), OLEDs are more power efficient for typical mixed image content because they are emissive displays so their power varies with the Average Picture Level (average Brightness) of the image content over the entire screen. For OLEDs, Black pixels and sub-pixels don’t use any power so screens with Black or dark backgrounds are very power efficient for OLEDs. For LCDs the display power is fixed and independent of image content. Currently, OLED displays are more power efficient than LCDs for Average Pictures Levels of 65 percent or less, and LCDs are more power efficient for Average Picture Levels above 65 percent. Since both technologies are continuing to improve their power efficiencies, the crossover will continue to change with time.

The Galaxy S8 also has 4 user adjustable Performance Modes and 3 adjustable Power Saving Modes that reduce the Display Power by lowering the screen Brightness and setting the background to Black, which can significantly reduce display power and more than double the running time on battery.
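The crossover rule DisplayMate describes reduces to a simple decision function. A sketch (the 65 percent threshold comes straight from the quote above, and will shift as both technologies improve):

```python
def more_efficient_tech(average_picture_level: float, crossover: float = 65.0) -> str:
    """Which display technology draws less power at a given Average
    Picture Level (APL, in percent). OLED power scales with APL because
    black pixels are simply off; LCD backlight power is roughly fixed
    regardless of content."""
    if not 0.0 <= average_picture_level <= 100.0:
        raise ValueError("APL is a percentage from 0 to 100")
    return "OLED" if average_picture_level <= crossover else "LCD"

print(more_efficient_tech(20.0))  # dark UI / black background -> "OLED"
print(more_efficient_tech(90.0))  # black text on a white page -> "LCD"
```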

Here’s the data from DisplayMate:


Image by DisplayMate

Here’s how to interpret these results. The larger screen (13.1 square inches of display area, versus 11.1 for the Galaxy S7) and higher brightness mean the Galaxy S8 consumes more absolute power. Compared under normalized conditions, however, the two displays are roughly equal in relative power efficiency. We tend to focus on display efficiency because it’s such a major driver of overall battery life these days, and Galaxy S7 owners stepping up to an S8 have nothing to worry about, at least as far as increased battery drain from the display.

If rumors are true and Apple actually fields an OLED display for its upcoming iPhone 8, it will be the first time in years that the two companies have gone head-to-head on display technology. That will be an interesting comparison to see, though we’d expect Samsung to have an advantage, given its long history of working with OLEDs in mobile displays.


Imagination Technologies’ share price collapses as Apple dumps company


Ever since Apple launched the original iPhone, it’s worked closely with Imagination Technologies. Even after Apple began building its own GPU architecture, it relied on Imagination for other building blocks of its total solution. Apple, however, appears to be bringing that relationship to an end, and Imagination Technologies’ stock has cratered as a result.


Think less Chelyabinsk and more Tunguska.

On Friday, Imagination Technologies was trading at 268.75 pence sterling. As of this writing, it’s trading at ~102 pence sterling. That’s a cataclysmic drop for the company, and it’s driven by a letter IT posted earlier today. It reads, in part:

Imagination Technologies Group… has been notified by Apple Inc. (“Apple”), its largest customer, that Apple is of a view that it will no longer use the Group’s intellectual property in its new products in 15 months to two years’ time, and as such will not be eligible for royalty payments under the current license and royalty agreement…

Apple has not presented any evidence to substantiate its assertion that it will no longer require Imagination’s technology, without violating Imagination’s patents, intellectual property and confidential information. This evidence has been requested by Imagination but Apple has declined to provide it.

Further, Imagination believes that it would be extremely challenging to design a brand new GPU architecture from basics without infringing its intellectual property rights, accordingly Imagination does not accept Apple’s assertions.

Apple’s notification has led Imagination to discuss with Apple potential alternative commercial arrangements for the current license and royalty agreement.

As recently as last year, Apple was reportedly in talks to purchase Imagination Technologies, but those talks fell through. Instead, Apple hired a number of Imagination’s former employees and engineers, which may be part of why IT is taking a rather aggressive stance with its claims that it would be difficult for Apple to develop a non-infringing part. Part of what makes the situation muddy is that Apple may have already built its own custom GPU. Apple’s payments to Imagination Technologies totaled £60.7 million in the year that ended in April 2016, and Apple is expected to pay roughly £65 million ($81 million USD) through April 2017. That’s roughly half of Imagination Technologies’ revenue, which explains why the stock has taken such a beating.

Imagination Technologies’ statements about taking Apple to court aren’t an empty threat, but court proceedings aren’t a substitute for ongoing license revenue. Even if IT won a court case against Apple, the appeals and counter-suits could drag on for years. The judicial system isn’t designed to settle these kinds of disputes quickly, and IT isn’t in great shape right now. The company fired 350 people last year and announced it would seek to reduce operating costs after a slump in iPhone sales slugged its bottom line. To-date, Imagination Technologies has tried to protect its PowerVR, MIPS, and Ensigma product divisions from cuts, but that may no longer be possible. PowerVR has provided the bulk of its earnings for years, so we wouldn’t be surprised to see the other segments jettisoned first.


New patch significantly improves frame rates in Zelda: Breath of the Wild


Last Friday, Nintendo pushed a patch for The Legend of Zelda: Breath of the Wild that’s said to significantly improve performance in both handheld and docked mode. The only actual patch note reads:

“Adjustments have been made to make for a more pleasant gaming experience.”

That’s a very short note for the improved experience you get. Kotaku took the game for a test-drive and reported significantly improved frame rates in Korok Forest, which normally had substantial dips. Jason Schreier tested the Switch in docked mode (important, since that mode reportedly had issues at the upscaled 900p resolution). Overall, he reports a smoother session, with occasional frame rate drops, but fewer dips into the mid 20s. Korok Forest and the various towns aren’t perfect, but there are fewer stutters and drops as shown in the video below:

Multiple readers have mentioned that frame rates can drop badly when killing Moblins, though others don’t seem to have the problem.

Wii U users don’t seem to have been left out in the cold, either. It’s harder to tell, since that platform doesn’t run the game very well in the first place, but commenters at both Eurogamer and NeoGAF attested that the game seemed better on the Wii U as well.

The Switch continues to sell extremely well. Last week, 4Gamer reported that the Switch had doubled its sales in the Japanese market, from 49,913 to 78,441 units for March 20-26. Media Create (via DualShockers) even claims the Switch outsold the PS4 at the same point in that console’s debut, moving 519,094 units in its first month, compared with 439,810 PS4s. It’s still too early to tell how the Switch will fare long-term, but early sales have been strong globally, despite the limited number of titles to play.

We’ve already discussed the difficulty of forecasting Switch sales based on its first few weeks, and the console could still take a downturn if Zelda doesn’t sustain its momentum. The faster Nintendo can bring new titles to market, the better positioned it’s going to be when machines like the Xbox Scorpio arrive. Nintendo is still talking about the Switch as a living room machine, and it definitely can be, but we’re still betting on a long-term handheld focus. If the Switch doesn’t end up replacing the 3DS, it’ll be surprising. If you’ve got a Switch or a Wii U and Breath of the Wild, let us know how it performs for you, and which modes (docked, handheld) benefited the most.


Next-generation DDR5 memory will double bandwidth compared with DDR4


Last week, JEDEC announced that the upcoming DDR5 memory standard will offer double the bandwidth and density of DDR4, improve channel efficiency, and offer better power efficiency as well. The new standard isn’t expected to be in-market until 2020 — JEDEC will finalize the standard next year, and announce new details at its Server Forum event in Santa Clara.

A new DDR5 memory standard for servers, desktops, and laptops would offer improved bandwidth (at least eventually), though absolute memory latency typically rises at the start of each new generation and rarely improves much on the outgoing standard, even at higher frequencies. Higher bandwidth is always helpful in server workloads and some workstation applications, and desktops and laptops with integrated graphics benefit from having more memory bandwidth available for gaming and 3D applications.
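For a sense of scale, peak bandwidth for one DDR channel is just the transfer rate times the 64-bit (8-byte) bus width. Assuming DDR5 simply doubles the rate of a common DDR4-3200 configuration:

```python
def peak_bandwidth_gbs(megatransfers_per_sec: float, bus_bits: int = 64) -> float:
    """Peak bandwidth in GB/s for a single memory channel."""
    return megatransfers_per_sec * 1e6 * (bus_bits // 8) / 1e9

print(peak_bandwidth_gbs(3200))  # DDR4-3200: 25.6 GB/s per channel
print(peak_bandwidth_gbs(6400))  # a doubled DDR5 rate: 51.2 GB/s per channel
```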

JEDEC also announced that its NVDIMM-P specification is moving along well. We’ve talked several times in the past about the hardware efforts to move NAND — flash memory — into DIMM sockets and data centers. Intel’s 3D XPoint memory, Optane, is also designed to fit in DIMM sockets, though the company hasn’t launched that version yet.


NVDIMM-P is a new NVDIMM standard that will take its place among the multiple variants already in-market. NVDIMM-N pairs an equal amount of DRAM and NAND flash on the same DIMM. NVDIMM-F uses a small amount of DRAM to buffer a large amount of flash and is typically used as an alternative to a PCI Express SSD; these drives have lower latency and better responsiveness on the DRAM bus than on the PCIe interface. NVDIMM-P combines NAND and DRAM on a single module and can interface with two different access mechanisms while using the existing DDR4 standard.

Back when DDR4 was new, we discussed some of the future technologies that might replace DRAM in a number of devices, including Hybrid Memory Cube, High Bandwidth Memory, and Samsung’s Wide I/O. Of the three, we’ve only seen HBM devices in-market, and those only on a relative handful of GPU models. That should change when AMD launches Vega later this year, but if JEDEC is planning for a DDR5, it obviously doesn’t think these alternate architectures are going to obviate the need for one in the near future. If the DDR5 standard is finished in 2018, it would likely launch in 2019-2020, but might not go mainstream until 2022-2023. Given how slowly computers evolve these days, it seems DRAM will be with us in one form or another for the foreseeable future.


Aukey Cortex 4K VR headset review: A lot of pixels at a low price


Not enough resolution is one of the most common complaints about the current crop of VR headsets: they’re all roughly 1080p (aka 2K), split between the two lenses. So I was really interested when Aukey came out with the Cortex 4K VR headset at $399.99 — substantially below the Rift or Vive. Specs are one thing, however, and real-world performance is another. I’ve been putting an Aukey Cortex 4K through its paces, and so far my test results have been mixed.

Unboxing the Aukey Cortex 4K

The Aukey VR headset has a pretty typical design with detachable headphones

The headset weighs in at 17.6 ounces — an ounce heavier than the Rift and a little over an ounce lighter than the Vive. I found it easy to use over my prescription glasses, which was a nice benefit. The unit is well put together, with side straps and a top strap. You can get it with matching over-ear headphones that plug into audio jacks on the unit; you attach them by routing the straps through them. The headset itself is about as comfortable as a DK2, but not quite as nice as a production Rift or Vive. Some other reviewers complained about the hard plastic nose piece, but in my case it didn’t actually sit on my nose at all, so it didn’t bother me.

Cable-wise, the Cortex 4K has an HDMI and a USB cable. The USB cable is apparently fine going into either USB 2 or USB 3 ports. Before you start using the headset, you need to download and install the Piplay software. It includes drivers for the device, as well as an intermediate layer that allows it to work with some Steam and Oculus-native titles, and access to the Piplay library of 3D and 360-degree content. Once I disconnected my Oculus headset, the Piplay software recognized the Cortex 4K immediately. It automatically prompted me to update its firmware, which was quick and painless. One quick tip is that for full functionality, you’ll want to launch Piplay as Administrator.

A selection of Steam games can be downloaded and launched from Piplay, but you can also use the SteamVR software

The unit has 1000Hz dual gyroscopes, and an acceptable 110-degree field of view. It can operate at up to 60fps (or up to 90fps in async mode). However, the gyros have an 18ms response, which may have contributed to the motion issues I felt while using it. It appears to be essentially equivalent to the Pimax 4K headset, and uses the same software. It can operate in Video mode (extended display), Direct Mode (more modern interface where applications can drive it directly), and Pimax mode (which is an enhanced version of Direct Mode that helps it do its emulation of other headsets).

If you need to access the support resources, you’ll find that many of them are in Chinese. Fortunately there is an active user community, so answers to many common questions can be found by searching, but Aukey clearly hasn’t invested much in polishing its user experience for the English-speaking market.

Aukey Cortex VR

For image quality, 4K matters, but it isn’t all that matters

The good news is that 4K resolution really does help eliminate the “screen door” (visible pixels) problem that is common with typical 2K VR headsets. However, the image on the Cortex isn’t as bright as the ones I’m used to seeing on a Rift or Vive. Image quality also seemed to fall off faster towards the edges, but that’s pretty subjective. In some cases, there was also a subtle vertical banding, but I couldn’t pin down what caused it to be visible in certain games or videos and not others.

The biggest single drawback of the Cortex 4K is that it doesn’t have full motion tracking. The internal sensors can follow rotation, but there is no tracking of left-right, forward-back, or up-down motion of your head. You can use your game controller to manually move yourself, but of course that isn’t the same thing as moving around and having your virtual presence move with you. That means the Cortex is best suited for 360-degree content (which I’m defining here as content that has a fixed perspective), including photos and videos.

Pimax's Piplay software offers a library of games, but the headset can also play many Oculus- and Steam-compatible titles

Entertainment, yes. Gaming, maybe not

The biggest problem I had using the Cortex 4K was an almost instant queasiness when starting a game. I had a hard time finishing more than a couple of laps in Project Cars, for example. With an Oculus Rift, I can drive for 20 or 30 minutes without any real issues. Part of the problem seems to be lag. The headset is rated at 60fps, the same as a DK2, which doesn’t have this issue. But it definitely feels like it lags more, and movement seems a little more jerky. The lack of full motion tracking probably also contributes.
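As a rough back-of-the-envelope sketch of why that lag is perceptible (using only the 60fps and 18ms figures from the spec above; the ~20ms comfort target is a commonly cited rule of thumb, and the simple addition of latencies is my assumption, not a measured motion-to-photon figure):

```python
# Rough motion-to-photon estimate for the Cortex 4K.
# Assumption: latencies simply add; real pipelines overlap stages,
# so treat this as an illustrative upper-bound sketch, not a measurement.

def frame_time_ms(fps):
    """Time to display one frame, in milliseconds."""
    return 1000.0 / fps

sensor_ms = 18.0                  # rated gyro response time
display_ms = frame_time_ms(60)    # ~16.7 ms per frame at 60 fps

total_ms = sensor_ms + display_ms
print(f"frame time at 60fps: {display_ms:.1f} ms")
print(f"estimated minimum latency: {total_ms:.1f} ms")  # ~34.7 ms
```

Even this optimistic estimate lands well above the roughly 20ms motion-to-photon latency often cited as the threshold for comfortable VR, which is consistent with the queasiness described above.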

This problem is pronounced for any kind of motion-based gaming. Now, I’m definitely on the wimp side of the VR gaming scale when it comes to motion issues, so for hard-core gamers there may not be a problem. But for most people, the Cortex 4K will probably be more useful for entertainment (360 videos and 3D movies) than for any kind of serious gaming.

You can launch many Oculus titles directly on the Cortex from Piplay

High marks for Steam and Oculus support, but 4K content is lacking

My testing of Steam and Oculus titles was definitely not conclusive or exhaustive, but most of the ones I tried seemed to run fine, with the limitation that you don’t have motion tracking or touch controllers, of course. One title I particularly wanted to try for its highly immersive environments wouldn’t run with the Aukey at all. As I discuss in the content section below, though, the native Oculus and Steam titles are aimed at 2K VR headsets, so don’t expect a big jump in image quality simply from viewing them on a 4K display.

Without question, the 4K image of the Cortex 4K helps eliminate the “screen door” effect you can see with typical 2K headsets. But that by itself doesn’t improve the apparent image resolution. While the headset does automatically upscale the image to 4K, it isn’t good enough to provide the quality of true 4K source material. Since the other major headsets are only 2K, almost all the online content aimed at VR users is currently only available in 2K (or less) resolution. There is apparently also a driver and HDMI spec issue that makes it difficult to get 4K experiences in full resolution to the headset. Aukey is actively working with Nvidia on solutions.

The videos in the device’s native Piplay library are a worst-case example of the content issue. Most of them are homemade and look to be less than 1080p, so they’re unimpressive. The library’s collection of 3D movies is better, but they aren’t full 360-degree content, and they only play in the older Video mode that treats the headset as an extended display. Video in Steam and Oculus apps is definitely less pixelated than when viewed on an Oculus or Vive, but it certainly didn’t look like it was using the full 4K resolution natively. In racing games, for example, I had a better sense of detail in the distant background, but it wasn’t a dramatic difference.

8K is next for Pimax and Aukey. They're starting a Kickstarter for an 8K unit with motion tracking soon.

Is the Aukey Cortex 4K right for you?

In my opinion, gamers are better off saving up for a Rift or Vive with full motion tracking and touch controllers. Especially with the recent Rift price cuts, it isn’t that much more money. You’ll also get a more polished user experience. However, budget-conscious users who want to experiment with VR for entertainment and would like to get something more powerful than a Samsung Gear VR can save some money and get a higher-resolution experience with the Cortex 4K. Interestingly, the company is already teasing a Kickstarter for its Cortex 8K, which will also feature full motion tracking.
