What is PixelSense Display?
Put simply, PixelSense is Microsoft's touchscreen technology, first used in 2008 in a multitouch tabletop computer the company designed.
Its special feature is that it understands hand gestures and integrates them through optical recognition of objects placed on the screen.
- The PixelSense display supports multi-touch input and hand gestures, and can optically recognize objects.
- The technology allows multiple users to share any type of digital content and even responds to touches by real-world objects.
- The display is easy to use and eases communication with its useful and cost-effective features.
- It helps get jobs done fast and supports gaming with added security, but it is an expensive technology with some functionality and lighting issues.
Understanding PixelSense Display
Microsoft PixelSense was formerly called Microsoft Surface. It is a computing platform built around a more interactive surface.
This unique technology and computing platform are beneficial for many reasons, including:
- It allows one or more people to use the screen at the same time
- It responds to touches by real-world objects and
- It allows sharing any form of digital content quickly and easily.
The first version of the display was announced on May 29, 2007, at the D5 Conference as Microsoft Surface 1.0. However, it only became widely available to consumers in 2008.
This first version of an end-to-end solution from Microsoft included a 4:3 rear-projection display with a resolution of 1024 × 768 pixels. The design included:
- An integrated PC and
- Five IR cameras.
The cameras detected moving objects and fingers roughly 60 times per second on the display, which was placed in a horizontal orientation.
This gave it a table-like look and allowed many people to see the screen from all sides and even touch it to interact or share the digital content from any angle.
When touched, the Surface platform starts processing. It first identifies the object touching the screen using the IR cameras. These objects fall into three types:
- Fingers
- Blobs or
- Tags.
Sometimes the system also exposes the raw vision data to applications.
Nonetheless, the device could recognize and track up to 52 simultaneous points of contact.
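For intuition, that classification step can be pictured as a toy rule: contacts carrying a printed IR-reflective pattern are tags, small contacts are fingers, and everything else is a blob. This is a hypothetical Python sketch; the area threshold and function name are illustrative, not part of Microsoft's actual vision pipeline.

```python
def classify_contact(area_mm2, has_tag_pattern):
    """Toy classifier for a detected contact on the Surface screen.

    Mirrors the three object types the platform distinguishes:
    fingers, tags, and blobs. The area threshold is illustrative,
    not a value from Microsoft's real pipeline.
    """
    if has_tag_pattern:
        return "tag"       # printed, machine-readable pattern detected
    if area_mm2 < 150:
        return "finger"    # small contact area, likely a fingertip
    return "blob"          # any other object resting on the screen
```

A real pipeline works from per-frame camera images at roughly 60 Hz, but the per-contact decision reduces to something of this shape.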
That covers the old version of PixelSense, the Microsoft Surface 1.0 product, which Microsoft discontinued in 2011 in anticipation of a new version, the Samsung SUR40.
This was typically designed for the Microsoft Surface 2.0 software platform.
Microsoft, however, partnered with Samsung and announced the new version of PixelSense at CES (the Consumer Electronics Show) in 2011; this is the device you find today as the SUR40.
It came into use in 2012 with a 40-inch (102 cm) LED-backlit LCD screen. Images were displayed at a 16:9 aspect ratio and a resolution of 1920 × 1080 pixels, commonly known as FHD.
This is a far better PixelSense technology that does not use IR cameras like the previous version.
This, in fact, helped in designing a much thinner product. The reduction in depth, from 22 inches to 4 inches, resulted in a few notable differences in its usage:
- It enabled placing the product horizontally
- It enabled mounting the product vertically and
- It enhanced the capability of the product.
The new technology and design in combination improved the product's ability to identify fingers, tags, and blobs, as well as to utilize the available raw vision data more effectively.
PixelSense was aimed primarily at commercial customers, especially in public settings.
The PixelSense display or platform typically comprises more advanced hardware components and software programs.
The hardware components include:
- A 40-inch LCD screen
- A 4-inch unit depth for horizontal mounting
- At least a dual core processor
- An HD graphics processor
- At least 4 GB of DDR3 RAM
- A Hard drive
- Wired and wireless network connectivity options
- Different physical connectors
- A 64-bit Windows Pro operating system and
- Corning Gorilla Glass protection for the surface.
As for the software, it includes a Surface Shell that runs and manages a set of relevant sub-processes and controls the hardware functions.
The strategic engineering and build combine several functional aspects, including but not limited to:
- Multitouch and vision-based computer hardware
- Multi-user, 360-degree application design and
- The Windows software program.
This harmony in operation creates the most productive and effective Natural User Interface, or NUI. This recognizes and supports different hand motions such as:
- Touching an object to select it
- Dragging the selected object across the screen
- Scaling a selected object by dragging two ends of it closer or farther
- Turning an object by dragging two or more points in a circular motion and
- Flicking to set aside an object by simply swiping it.
For each of these hand motions, the screen detects the momentum behind the action and uses it to determine how far the selected object moves.
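The scaling and turning gestures above reduce to simple geometry on two touch points. A minimal Python sketch, assuming screen-pixel coordinates (the function names are my own, not a PixelSense API):

```python
import math

def pinch_scale(a_old, b_old, a_new, b_new):
    """Scale factor implied by two touch points moving closer or farther apart."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return dist(a_new, b_new) / dist(a_old, b_old)

def rotation_angle(a_old, b_old, a_new, b_new):
    """Rotation in radians implied by the line between two touch points turning."""
    before = math.atan2(b_old[1] - a_old[1], b_old[0] - a_old[0])
    after = math.atan2(b_new[1] - a_new[1], b_new[0] - a_new[0])
    return after - before
```

Dragging the two points twice as far apart yields a scale factor of 2.0, i.e. the object doubles in size; turning the pair a quarter circle yields a rotation of π/2 radians.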
These objects are uniquely identified by the computer's hardware and software based on three specific parameters:
- The shape of the object
- The size of the object and
- The tag patterns.
After this, the computer initiates a preprogrammed response using its innovative architecture.
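One way to picture that preprogrammed-response step is a registry mapping recognized tag patterns to handlers. This is a hypothetical Python sketch; the tag ID, decorator, and handler are invented for illustration and are not part of any Microsoft API.

```python
# Hypothetical registry: recognized tag ID -> preprogrammed response.
RESPONSES = {}

def on_tag(tag_id):
    """Register a handler to run when an object with this tag is recognized."""
    def register(handler):
        RESPONSES[tag_id] = handler
        return handler
    return register

@on_tag(0x2A)  # e.g. a tagged glass placed on the table
def show_drink_menu():
    return "drink menu"

def handle_object(tag_id):
    """Dispatch the preprogrammed response for a recognized tag, if any."""
    handler = RESPONSES.get(tag_id)
    return handler() if handler else None
```

Placing a tagged object whose ID is registered triggers its handler; unknown objects simply produce no response.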
Simple as it may look overall, the structure of this technology involves four specific parts. These are:
- The screen: This is actually a diffuser that turns the acrylic tabletop of the Surface into a large, horizontal multi-point touchscreen, processing different inputs from multiple users by recognizing shapes or reading coded tags.
- The infrared spectrum: This uses an LED light source with an 850-nanometer wavelength. The light points at the screen and is reflected back, where it is picked up by the infrared cameras.
- The CPU: This is much like that of an ordinary desktop computer and communicates wirelessly with other devices through Wi-Fi and Bluetooth.
- The projector: This is a Digital Light Processing (DLP) light engine, as you may find in a DTV. It allows the cameras to read a virtually unlimited number of simultaneous touches, within the limits of the processor and its power.
There are four main attributes or components in a PixelSense interface. These are:
- Direct interaction – This refers to the ability of the user to reach out and simply touch the interface to interact with the application, eliminating the need for a traditional mouse or keyboard.
- Multi-touch contact – This refers to the capability of the screen to respond to touches at multiple contact points at once. A traditional mouse, by contrast, offers only a single point of contact through the cursor.
- Multi-user experience – This refers to the benefit of multi-touch, which allows many people to face the screen from its different sides and interact with the app at the same time.
- Object recognition – This refers to the ability of the device to detect the presence and orientation of tagged objects placed on top of the screen.
The most significant aspect of the PixelSense technology is that it lets you place any type of non-digital object on the screen and use it as an input device, provided the object is supported by the technology.
This is because modern PixelSense displays no longer use cameras. The display is therefore free from the restrictive properties of an orthodox touchscreen, such as:
- The electrical resistance
- The capacitance or
- The temperature of the object used.
These characteristic features of the PixelSense display replaced the GUI, or Graphical User Interface, that was so popular in the 1980s and 1990s, with the NUI, or Natural User Interface.
It is used in several industry verticals, including but not limited to:
- Financial services
- Media and entertainment
- Education.
This technology is also very useful for basic applications such as music, photos, virtual concierge services, and even games, customizing them all for end-users.
As of now, the PixelSense technology is available for sale and use in many countries.
With the ever-growing popularity of this technology, the day is not far when there will be no computer on a desktop.
In fact, each desktop will itself be a computer. Computer scientists are working hard to integrate this technology into intelligent surfaces all around people to make their lives better.
Different form factors are expected to evolve in the near future where surface computing will become a norm in any sort of environment, be it in schools or in offices, at public places or even in your kitchen!
Microsoft PixelSense is the future of computers and it will certainly break down the age-old barrier between technology and mankind.
It will present things in a new way where the display itself will create a lot of newer and better possibilities.
Advantages of PixelSense Display
1. Easy to use
A PixelSense display is easy to use because it does not require a traditional keyboard or mouse, USB ports, or wires. People interact with it simply by touching the screen or placing an object on it.
2. Useful and cost effective
Users also need no prior training or foreknowledge to operate the PixelSense display, which makes it more useful and cost-effective.
3. Multiple user benefit
Since no additional hardware or input device is needed to interact with the PixelSense display, multiple users can access the screen at the same time and interact with the images or objects, giving them a real-world experience.
4. Ease in communication
The enhanced capability of the display produces better results and, in effect, helps communication run much more smoothly.
No matter what object you use, the system will relay it back to the app and react within a fraction of a second.
5. Added security
Being able to differentiate between your hand and someone else's, this technology adds a new layer of security to the system.
Beyond your hand, it can even read the object you place on the screen, and if that object is not preprogrammed, it will not allow access to the device.
However, much of this functionality will depend on the resolution of the camera.
6. Enhanced gaming experience
Now you can play games with multiple players using the same screen but from different sides.
This will make things more interesting and impactful, without having to worry about damaging or breaking the screen, even when kids are at play, thanks to the Gorilla Glass protection.
7. A leap forward
The second-generation technology was expensive; the third generation promises to be more practical and more affordable.
Generation three will be a huge leap forward, with more speed, higher resolution, and lower response time and lag.
8. Get jobs done fast
Using this technology, you can perform your computing tasks most easily, efficiently and instantly.
Whether it is writing text or sharing content, downloading or uploading a photo or a video, you will not be kept waiting.
9. Control of technology
Since everything will be at your fingertips, literally speaking, you will have better control and use of technology.
You can use it for jobs like editing photos or even ordering dinner from a local restaurant.
10. Time saving
Every job on the computer will be done fast since many simple and painstaking processes will be eliminated.
This will save a lot of time and increase your productivity in turn.
11. Modify input interfaces
If it is supported, you can even use different software to modify different input interfaces most creatively since there is no need to use any specific switches or physical buttons.
Each of these input interfaces can also be usage and operation specific such as rotation, zoom-in, zoom-out and more.
12. Space saving
Space comes at a premium nowadays and use of a PixelSense display will save a lot of space, and therefore your money, because the display as well as the input space are both integrated.
You can also be more creative as a device maker since this offers a lot more flexibility when it comes to its design.
13. Easy maintenance
Since there are no physical switches or buttons to operate, there are no gaps between switches for dust, moisture, and dirt to accumulate in.
This minimizes the need for maintenance, and even for replacement of switches.
Disadvantages of PixelSense Display
14. Complex technology
Though it has nothing to do with the end-user, the complex technology and structure of a PixelSense display can be a concern for device makers as well as the professionals who maintain it.
15. Functionality issues
There may be some issues with the functionality and response of the screen, because the hardware has a nasty tendency to drift out of alignment.
This means that a button that appeared in one spot on the screen the last time you used it may now be somewhere else entirely, so when you try to activate it again, you may end up touching something else.
16. Portability and cost
The portability of this screen is very low and the cost of it is very high. As of now, it seems that a lot needs to be done to make it more affordable for the general public.
17. Lighting issues
In order to operate the PixelSense display properly, you will need a dimly lit setting. This is because the screen may seem to be washed out under bright lights.
18. Accuracy matters
You will need to be accurate while operating this display and selecting an object or menu. Fat fingers will not be as precise and efficient on this screen as a mouse or even a stylus.
19. Tagging is required
Simply touching an object will not suffice. Objects must be tagged for efficient operation and to enjoy the full benefits of a PixelSense display.
20. Prone to damage
These screens, being very delicate and sophisticated, are prone to damage from hard touches and nail scratches, since you touch the display directly.
This may lead to malfunctioning of the screen in certain cases. Moreover, if dust and dirt are allowed to sit on it, the display or the images on it may not be clearly visible.
21. The click feeling factor
Typically, you can tell whether a mouse or a push button has registered simply by hearing the click.
Since a PixelSense display offers nothing of the sort, you will not feel the action, and your operation may be a bit clumsy at times.
However, a few touchscreens do offer this click feeling when you touch them.
22. Not for all
A touchscreen is not designed for everyone to use, especially those who are visually impaired.
Innovation is still required in this area to find inventive ways of letting them know where the screen is to be touched.
Physical buttons, sound navigation, or both could be a big help to them in operating a touchscreen.
PixelSense display is truly a revolutionary technology and it is here to stay.
It will change the way people interact not only with computers but several devices that they will have access to in the near future.