Technology News

A New Way to Experience Daydream and Capture Memories in VR

A New Way of Capturing Memories: Google’s Daydream

Google’s main aim with Daydream is to make VR accessible and useful to everyone; right now, only a few people understand what VR is and how they can use it. Google is trying to get people on board with the whole VR experience by partnering with smartphone makers to bring VR to people’s phones, and by putting it into gadgets that let people capture their memories much like a camera, only more immersively.

One of Google’s attempts at making the VR world more accessible is the Daydream platform, which simplifies access to VR content on smartphones. Google has even made Daydream-compatible headsets for VR viewing, and there is a Daydream Home that lets you access various VR apps and videos. Currently there is no horde of VR-capable videos, even on YouTube, and many people don’t really understand what VR has to offer outside the gaming world. Google plans to change that perception by bringing out various Daydream gadgets in collaboration with partners.

Google’s Daydream and Lenovo Mirage Solo:

As the name suggests, you will be flying solo on this gadget; in other words, you don’t need a smartphone to use it. The Lenovo Mirage Solo combines the best of smartphone VR: ease of use and portability.

What’s even better about this device is that it is comfortable to use, and with Google’s new WorldSense tech it gives you a more immersive VR experience. The Lenovo Mirage Solo comes with simultaneous localization and mapping, or SLAM for short, which gives you PC-like positional tracking without the need for any extra sensors.

The gadget is also great for gaming because of its immersive VR experience and its strong tracking capabilities. Using the Lenovo Mirage Solo with Google’s Daydream only adds to the experience.

The Lenovo Mirage Solo will have access to Google’s Daydream apps, and with YouTube VR you can enjoy the VR experience even more. What’s more, Google is working closely with developers to bring new experiences to the Daydream platform that take advantage of new hardware like the Lenovo Mirage Solo.

Google’s Daydream and VR 180 cameras:

Everyone these days wants to capture their memories and share them with their near and dear ones. Google is trying to make this experience even more memorable by introducing VR 180 cameras.

With these VR 180 cameras you can shoot everything in 3D at 4K resolution and get more of an “I was there” experience.

You can then browse through all the memories you captured on VR headsets like Google’s Daydream View and Cardboard, or, for a more accessible option, you can even use your smartphone.

You can share these memories over platforms like YouTube, or watch them on your desktop or phone. You can even access Google’s Daydream with these VR 180 cameras.

Google’s Daydream Compatible VR 180 cameras coming out this year:

Different VR cameras will have different features apart from accessing Google’s Daydream; some will even offer live video streaming so you can share your moments with people in real time.

Of the many VR 180 cameras due to be released this year, the Lenovo Mirage Camera and YI Technology’s YI Horizon VR180 camera will come out in the second quarter.

For a more professional approach, the Z Cam K1 Pro was recently released.

Technology News

My Special Aflac Duck Helps Kids Living with Cancer

The New Care Giver: My Special Aflac Duck

The recent CES in Las Vegas saw many companies showcase new and innovative gadgets. Although most of these products may never see the light of day, CES is still a place where tech companies can get a feel for the market for their gadgets. So it was surprising to see an insurance company among the many tech companies at CES, showcasing its gadget, the “My Special Aflac Duck”.

What’s more, the My Special Aflac Duck won the Best Unexpected Product award at CES. All in all, the insurance company’s endeavor was well received by everyone present.

More about the My Special Aflac Duck:

Aflac is a company that deals in voluntary insurance. This social robot, developed in collaboration with Sproutel, is a companion for a child suffering from cancer. The My Special Aflac Duck was developed to make health care more playful and less stressful for the wee ones.

Having taken a year to develop, and with four patents to back it up, the My Special Aflac Duck is all set to offer care and comfort to small cancer patients.

The My Special Aflac Duck is focused on bringing comfort to children suffering from cancer. With its naturalistic movements and interactive tech, the duck is a real source of comfort for children. It also comes with an app that uses augmented reality, in which the duck mirrors the child’s life: being fed, and even enduring the same painful therapy as the child.

The My Special Aflac Duck also imitates the child’s moods, dances, quacks, and cuddles up to the child when they need it most. Aflac and Sproutel are ready to deliver the product to a center in Atlanta later this year for further testing, and are set to launch it nationwide in 2018-19. Backed by four patents and a hard year of research, it is no wonder the My Special Aflac Duck has taken up the hearts and minds of all those present at CES.

The brains behind the My Special Aflac Duck:

Aflac is the leading voluntary insurance provider in the US, known for paying out quickly when a patient falls sick; payouts normally arrive the same day the patient needs the money. The company has helped shift families’ minds from financial strain to healthcare in their most trying times. It has also taken root in Japan as one of the leading insurance providers there, covering one in every four households. Aflac is short for American Family Life Assurance Company of Columbus.

Sproutel, on the other hand, is a research and development workshop whose main focus is developing products that provide emotional support to patients in need. It has collaborated with many not-for-profit organizations and companies alike to come out with products such as the “My Special Aflac Duck” and “Jerry the Bear.”

Technology News

Single Metalens Focuses All Colors of the Rainbow in One Point

Metalens could Revolutionize AR and VR Technology

Lenses in use today, whether in a camera or elsewhere, actually stack a number of lens elements in order to correct what is known as chromatic aberration (more on that below). This use of multiple lenses makes the instrument or gadget bulky and complex in design. Recently all that changed: researchers have created a single metalens capable of overcoming chromatic aberration, resulting in a more compact and simply designed device. The creation of such a metalens also opens up various possibilities in the VR and AR world.

What is a Metalens?

A metalens is a flat surface with nanostructures that focuses all colors of the visible spectrum, including white light, into a single point.

Previously, metalenses could only focus a limited range of colors; the only other option was curved lenses, which had to be used in stacks just to overcome chromatic aberration, resulting in very bulky products.

Using multiple layers of curved lenses was necessary because of chromatic aberration. Different colors’ wavelengths travel at different speeds through a given material; for example, a red wavelength passes through glass faster than a blue wavelength does. The colors therefore come to a focus at different points, producing different foci. This difference in foci is called chromatic aberration.
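To make the effect concrete, here is a minimal sketch using the thin-lens lensmaker’s equation, 1/f = (n − 1)(1/R1 − 1/R2): because the refractive index n depends on wavelength, red and blue light focus at different distances. The refractive indices below are typical values for crown glass, and the lens radii are illustrative assumptions, not data from the study.

```python
def focal_length(n, r1, r2):
    """Focal length (m) of a thin lens with refractive index n and surface radii r1, r2."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

R1, R2 = 0.10, -0.10          # biconvex lens, 10 cm surface radii (assumption)
n_red, n_blue = 1.513, 1.528  # crown glass at roughly 656 nm and 486 nm

f_red = focal_length(n_red, R1, R2)
f_blue = focal_length(n_blue, R1, R2)

print(f"red focus:  {f_red * 100:.2f} cm")
print(f"blue focus: {f_blue * 100:.2f} cm")
print(f"focal shift (chromatic aberration): {(f_red - f_blue) * 1000:.2f} mm")
```

Even this small index difference separates the red and blue foci by a few millimeters, which is exactly the gap that stacked corrective elements, or a single achromatic metalens, must close.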

So until metalenses came along, the problem of chromatic aberration was solved by stacking a number of curved lenses of different thicknesses and materials.

Advantages of a Metalens over traditional lenses:

Metalenses are made from cost-effective materials, are easy to make, and are thin compared to today’s traditional lenses.

How has a Metalens taken care of the problem of Chromatic Aberration?

The research team used arrays of titanium dioxide nanofins to create a metalens that eliminates the problem of chromatic aberration. The researchers used paired titanium dioxide nanofins to control the speed of the various wavelengths simultaneously.

Another advantage of paired nanofins is that they also control the refractive index across the metalens, creating time delays for the wavelengths passing through the different fins. This ensures that all colors arrive at the same time and at the same place.

Major problems to overcome in making a Metalens:

Researchers had to ensure that all the colors of the spectrum were focused at a single point, and that they reached this point at the same time from different points on the metalens.

By joining two nanofins into one element, researchers were able to bring all the colors of the spectrum to a focus at one point, all arriving at that focal point at the same time.

This breakthrough will greatly reduce the thickness and design complexity of optical devices such as cameras. It also opens up the AR and VR world of tech.

Researchers are now planning to develop a larger metalens, about 1 cm in diameter.

Technology News

Chinese Tesla Rival Byton Unveils ‘Smartphone on Wheels’

Byton is a China-based company that unveiled its electric car at the Consumer Electronics Show. Byton will be launching the futuristic smart car in 2019. It was created by former engineers from BMW and Apple; the name Byton actually comes from the phrase “Bytes on Wheels”.

The Byton has been equipped with smart technology that will enable autonomous driving. It provides for interaction between the driver, the passengers, and the smart display panel in the car, via touch, voice control, or gestures.

The car will go into production in 2019 and will be available in the US and Europe in 2020. It will be selling at a price of $45,000.

This car has been pitted against Tesla’s Model 3, which sells for $35,000. Tesla has 8,496 Supercharger points globally, while Byton has none; Byton is seeking to have its cars charged at Tesla’s charging points, and Tesla is willing to accommodate it at a cost.


On the inside of the Byton smart car there is a huge touchscreen that occupies the space of the dashboard, stretching from end to end. This display panel serves as an active interface for the driver and passengers. During the day the display is bright white, and at night it tones down to cause less distraction. The display can show video clips, calendars, health data, and much more.

When the Byton smart car is in fully self-driving mode or stationary, the front seats can rotate inwards at an angle of 12 degrees to provide a comfortable position for interaction.

An 8-inch tablet has been embedded in the steering wheel.

The main display unit and the tablet can be controlled by a combination of facial recognition, voice activation, and gestures. The gestures are read by cameras located below the center of the dashboard. The cameras can determine who you are, load your profile, adjust the driver’s seat, and perform other customized actions.

The Byton smart car is also equipped with screens on the back of each front seat to entertain the extra passengers.

In its production version, Byton is promising only Level 3 capabilities, somewhere between a driver-operated car and fully autonomous driving. It actually means the car can drive itself, but the driver has to stay alert and be ready to take control at all times. After 2020 they will raise it to Level 4 autonomy, whereby the driver can actually go to sleep.

Technology News

A Nonaddictive Opioid Painkiller with No Side Effects

An Opioid Painkiller That Isn’t Addictive and Has No Side Effects

Painkillers are addictive, and people often keep taking them even after the drugs have done their job. Taking painkillers when there is no need usually results in unwanted side effects that can dramatically affect a person’s health. It is very easy to develop an addiction to an opioid painkiller, but scientists have come up with a new kind of painkiller that is neither addictive nor prone to extreme side effects.

This new opioid painkiller was discovered and developed by an international team of scientists who worked out the crystal structure of the kappa opioid receptor.

It is worth noting that this particular receptor is extremely important in providing relief from pain, acting on human brain cells to lessen pain drastically.

The international team consisted of 24 scientists led by researchers from the University of North Carolina, along with three scientists from the USC Michelson Center who made their name researching the special receptors found exclusively on neurons.

Another important discovery the researchers made is that this opioid-based compound activates only the kappa opioid receptor, which ensures there is no risk of addiction among users, and that it will not carry any dangerous side effects.

The finding has been published in the reputed journal Cell, which gives insights into and details of the discovery and development of this painkiller.

Challenge To Bring New Opioid Painkiller into Mainstream Healthcare

In drug research and development, scientists face the major challenge of coming up with reliable new alternatives for easing pain while keeping side effects as low as possible.

But using opioids presents the danger of addiction, which is rampant among users of opioid painkillers. Data shared by the National Institutes of Health shows that more than 1 in 10 American citizens suffers from chronic pain requiring painkillers, and this has led millions into addiction to opioid painkillers.

The Science behind the New Opioid Painkillers

It is worth noting that G protein-coupled receptors, found on the surface of the cell membrane, are known to act as communication gatekeepers for cells.

The route to pain relief is to understand the structure of these receptors and get them to interact with the right kind of drug compound, with limited side effects.

Usually, scientists determine the structure of a receptor by forcing the protein into a crystal lattice, which is then exposed to X-rays.

This group of scientists set out to build an accurate model of the receptor, to understand when it does and doesn’t interact with a drug compound.

Their research has also generated a greater understanding of the receptor and its behavior, and the new opioid painkiller is based on this very knowledge.

Software, Technology News

Tacotron 2: Generating Human Like Speech from Text

Tacotron 2: Google simplifies the process of teaching AI to speak like a human

Developing the perfect language translation tool has been a difficult challenge for scientists, researchers, and entrepreneurs alike for quite some time. Google has made some headway with its translation services, but when it comes to AI-generated speech, almost everything comes out as a robotic voice that is easily distinguished from a human one. With Tacotron 2, Google has developed a new method that will help developers train neural networks to produce realistic speech.

Tacotron 2 will help bring realistic speech to translators by analysing text, without the need for any grammatical expertise in the language in question. The method builds on two earlier speech generation projects: the original Tacotron and WaveNet.

The earlier voice generators

In its early days, WaveNet left everyone speechless by offering eerily convincing speech, but it generated one audio sample at a time. This was a great achievement, but it wasn’t effective for language translation or text-to-speech purposes. To be usable for voice generation, WaveNet had to be fed a great deal of metadata about every aspect of the language, from the right pronunciation to key linguistic features. The first Tacotron overcame part of this problem: it captured high-level linguistic features well, but its audio quality wasn’t good enough to be used directly in the speech products of the time.

How does Tacotron 2 work?

Tacotron 2 makes use of paired pieces of text and narration in order to learn the way a language is spoken by natives. In short, it effectively computes all the linguistic rules that might apply to the given text in order to render a human-like voice. The method converts the text into a Tacotron-style mel-scale spectrogram, which captures the rhythm and emphasis, and the words themselves are then formed using a WaveNet-style system for a more realistic result. The resulting speech from Tacotron 2 is extremely convincing as a realistic human voice, though it tends to sound slightly more chipper than usual.
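The two-stage pipeline described above can be sketched in miniature: stage one predicts a mel-scale spectrogram from text, and stage two (a WaveNet-style vocoder) expands that spectrogram into an audio waveform. The functions below are crude stand-ins, not the real neural networks; the 80 mel bands match the Tacotron 2 paper, while the frame-per-character mapping and 256 samples per frame are simplifying assumptions made only to show the data flow.

```python
import math
import random

N_MELS = 80        # mel bands per spectrogram frame, as in Tacotron 2
HOP_SAMPLES = 256  # audio samples produced per frame (assumption for this sketch)

def text_to_mel(text):
    """Stand-in for the seq2seq spectrogram predictor: one frame per character."""
    random.seed(len(text))  # deterministic placeholder "features"
    return [[random.random() for _ in range(N_MELS)] for _ in text]

def vocoder(mel_frames):
    """Stand-in for the WaveNet-style vocoder: expands each frame into samples."""
    audio = []
    for frame in mel_frames:
        energy = sum(frame) / N_MELS
        audio.extend(math.sin(2 * math.pi * energy * i / HOP_SAMPLES)
                     for i in range(HOP_SAMPLES))
    return audio

mel = text_to_mel("hello world")
audio = vocoder(mel)
print(f"{len(mel)} spectrogram frames -> {len(audio)} audio samples")
```

The key design point this illustrates is the clean hand-off: the spectrogram is the only interface between the linguistic model and the waveform model, which is what lets each stage be trained and improved separately.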

Overcoming the challenges with Tacotron 2

Tacotron 2 still has a number of shortcomings to overcome before it can render a human-like voice for language translation. Researchers have stated that the technique has great difficulty pronouncing hard words like “decorum” and “merlot”. In some cases it freaks out, generating really eerie and strange noises. The second major shortcoming is that it cannot translate and generate voice in real time. Thirdly, researchers cannot yet control the tone of the generated voice or make it sound happy or sad. But the team is undertaking rigorous research to overcome these challenges and bring the world’s first realistic voice generator to the masses.

Technology News

Light Pollution: The Unfortunate Side Effect of LED Lighting

LED light is harmful to you in a way you might not think of

Since we invented LED lights, we have brought them everywhere, from homes to hoardings and beyond. But a recent study has shown that this has resulted in serious light pollution, which we simply can’t overlook. LEDs amassed huge popularity among consumers worldwide for being energy-efficient and cutting energy costs. That meant people started installing more light at the same cost, making the world brighter than ever before in the history of mankind.

LEDs allowed us to illuminate the world without thinking twice about the impact on the environment. A study published in the journal Science Advances has brought up a number of issues that have surfaced due to the global rise in the illumination of our surroundings. The study shows that artificially lit outdoor surfaces grew at a rapid pace of 2.2 percent per year between 2012 and 2016, which is alarming. More and more people globally are using light at night without thinking twice about the environment.

We are lighting up almost everything needlessly

The research shows that the growth in lighting has occurred on a global scale in a similar fashion, from South America to Asia and Africa. There are a few exceptions, but data from a unique radiometer mounted on a satellite has revealed nighttime lighting at a scary scale. It was found that we have developed a knack for lighting up almost everything, whether required or not, such as bicycle pathways and sections of highways that weren’t lit in the past.

The research also shows that light pollution was barely seen in war-torn regions like Syria and Yemen, while countries like Spain, the US, and the Netherlands were right ahead in enjoying the brightest nights. The report also forecasts that artificial light emission will continue to increase in the days to come.

A noble cause turned into a creepy reality

Just a decade ago, the rise in global temperatures and the incessant burning of fossil fuels led to the push for LED lights. A massive global campaign was launched to adopt LEDs out of environmental concern, but we unknowingly created a new light-related problem for ourselves. Now we have light pollution to deal with, where everything is lit regardless of need or utility, simply because we can afford to light it. The research elaborately discusses how light affects human body clocks and seriously damages sleeping patterns. Excessive light is even blamed for the lack of sleep in modern society, which in turn makes us susceptible to a number of health problems such as high blood pressure, depression, and diabetes.

Technology News

Dolby Vision vs HDR10: Which is best?

What is HDR on TV? Which is better HDR 10 or Dolby Vision?

HDR is one of the buzzwords when we talk about televisions in 2017. We’ll see what it is and whether it’s worth it. We will also analyze the characteristics of the two most popular HDR standards: HDR10 and Dolby Vision.


High Dynamic Range (HDR) is a concept that will be very familiar to photography fans. Basically, this technology allows you to see dark areas and bright areas at once and in detail. This is achieved with darker blacks and brighter whites, without one interfering with the other.

If 4K brought us more pixels, HDR tries to improve the quality of those pixels.


Currently there are two technologies competing to become the HDR standard in the world of televisions: HDR10 and Dolby Vision.


HDR 10

HDR10 is an open and free standard, for which manufacturers do not have to pay any license fees.

DOLBY VISION

Dolby Vision is a proprietary standard of Dolby Laboratories. It is a more demanding standard, and in theory it should give higher quality than HDR10. We say in theory because image quality in the end depends on many more factors, but the technology itself is superior.


Below is a table summarizing the main differences; we will then look at the implications of each:

                                             HDR10                          Dolby Vision (DV)
Depth of color                               Good (10-bit)                  Excellent (12-bit)
Maximum brightness                           Excellent                      Excellent
Tonal mapping                                Depends on the manufacturer    More consistent (better)
Metadata                                     Static                         Dynamic (better)
Equipment availability (TV, Blu-ray, etc.)   Very good                      Limited
Content availability                         Good and growing rapidly       Limited

Depth of color

HDR10 uses a 10-bit color depth, making it possible to represent 1.073 billion colors. This number comes from the following calculation.

Since each bit can have two values, the total number of different values (colors) in 10 bits is:

2^10 = 1024

With 10 bits we can represent 1024 values per channel. As each pixel is formed from 3 primary colors (RGB: red, green, and blue), the total number of possible colors is:

1024^3 = 1,073,741,824 ≈ 1.073 billion

The Dolby Vision standard, on the other hand, has a color depth of 12 bits. Calculating in the same way as before, we find it can represent more than 68 billion colors.

To get an idea, the normal content we are used to seeing has 16.7 million colors (8 bits of color depth). The jump to HDR means seeing much smoother color transitions and gradations.
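The arithmetic above is easy to verify: colors per channel is 2 raised to the bit depth, and total RGB colors is that value cubed.

```python
def total_colors(bits_per_channel):
    """Total RGB colors for a given per-channel bit depth: (2^bits)^3."""
    per_channel = 2 ** bits_per_channel
    return per_channel ** 3

print(f" 8-bit: {total_colors(8):,} colors")   # standard content, ~16.7 million
print(f"10-bit: {total_colors(10):,} colors")  # HDR10, ~1.073 billion
print(f"12-bit: {total_colors(12):,} colors")  # Dolby Vision, ~68.7 billion
```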


Maximum Brightness

One of the parameters that contributes most to image quality is contrast: the difference in light between the illuminated areas and the dark areas. For this, the HDR standards define a brightness level that manufacturers have to achieve to obtain certification.

The HDR10 standard specifies a maximum brightness that can vary between 1,000 and 4,000 candelas per square meter (cd/m2, a measure of luminance). Dolby Vision, however, requires that the maximum brightness always be 4,000 cd/m2.

Both standards are prepared to reach a maximum brightness of 10,000 cd/m2, but at the moment there is no television on the market that can reach, or even come close to, those brightness levels.

In this section both standards offer very similar capabilities.

Tonal mapping

Tonal mapping is a process that adapts the tones of an image, allowing us to approximate tones that the television is not capable of producing.

For example, say a TV with a maximum brightness of 1,000 cd/m2 is playing a movie mastered at 4,000 cd/m2. How does it represent the parts of the image above 1,000 cd/m2, which the TV is not able to reach?

Without tonal mapping, it simply does not reproduce them, and everything above 1,000 cd/m2 loses detail. In photography, that part of the image is said to be burned.

Tonal mapping modifies the image so that the 4,000 cd/m2 parts are represented at 1,000 cd/m2, and the intermediate values (from 1,000 to 4,000 cd/m2) are remapped to below 1,000 cd/m2.
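A minimal sketch of that idea: compare a hard clip, which burns everything above the display’s peak, with a simple Reinhard-style compression curve that squeezes the whole 0–4,000 cd/m2 range into 0–1,000 cd/m2 so bright areas keep relative detail. The curve below is a generic illustration chosen for this sketch; it is not the PQ function or the mapping used by either standard.

```python
DISPLAY_PEAK = 1000.0  # what the TV can show (cd/m2)
MASTER_PEAK = 4000.0   # what the movie was mastered at (cd/m2)

# Constant chosen so the compression curve maps MASTER_PEAK exactly to DISPLAY_PEAK.
A = MASTER_PEAK / (MASTER_PEAK / DISPLAY_PEAK - 1.0)

def clip(luminance):
    """Hard clip: all detail above the display peak is lost ("burned")."""
    return min(luminance, DISPLAY_PEAK)

def tone_map(luminance):
    """Reinhard-style compression: bright values are squeezed, never flattened."""
    return luminance / (1.0 + luminance / A)

for lum in (500, 1000, 2000, 4000):
    print(f"{lum:5} cd/m2 -> clip: {clip(lum):6.0f}, tone-mapped: {tone_map(lum):6.1f}")
```

Notice that the clip maps 2,000 and 4,000 cd/m2 to the same value, destroying the difference between them, while the tone-mapped values stay distinct all the way up to the display peak.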

For tonal mapping, both standards use a PQ transfer function. However, the Dolby Vision system uses its own chip, which adjusts the tonal mapping to the limitations (maximum brightness) of the television. In HDR10, the tonal mapping is configured entirely by the manufacturer, which can cause inconsistencies between television models (a film may look different depending on the TV on which it is seen).


Metadata

Metadata is extra data that describes other data. In video, metadata is used to describe the format and content of the video. For example, the metadata can describe the illumination levels of a scene, so that the television can use them to represent the image as closely as possible to what the film director specified.

The HDR10 format only uses static metadata, meaning the metadata is consulted just once, at the start of playback. Dolby Vision (DV), meanwhile, uses dynamic metadata, which allows different metadata to be used for each image.

This is an important advantage of Dolby Vision. Suppose that when viewing a movie in HDR10, the metadata specifies that brightness during the movie will vary between 0 and 1,000 cd/m2. If there is a very dark scene in which nothing exceeds, say, 50 cd/m2, only 5% of the color depth can be used, since the 1.073 billion colors are distributed across the whole 0 to 1,000 cd/m2 range. The scene will look visually poor and many details of the image will be lost.

With DV and dynamic metadata, however, display parameters such as minimum and maximum brightness can be set scene by scene.
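The 5% figure from the dark-scene example can be reproduced with a quick calculation. For simplicity this sketch assumes the 1024 per-channel code values are spread linearly over the declared brightness range (the real PQ curve is nonlinear, so the exact fraction differs, but the static-vs-dynamic effect is the same).

```python
CODES = 2 ** 10  # 1024 code values per channel in 10-bit HDR10

def usable_codes(scene_peak, declared_range):
    """Codes available to a scene when codes span `declared_range` linearly (cd/m2)."""
    return int(CODES * min(scene_peak / declared_range, 1.0))

dark_scene_peak = 50.0  # a very dark scene: nothing brighter than 50 cd/m2

static = usable_codes(dark_scene_peak, declared_range=1000.0)  # whole-movie metadata
dynamic = usable_codes(dark_scene_peak, declared_range=50.0)   # per-scene metadata

print(f"static metadata:  {static} of {CODES} codes ({static / CODES:.0%})")
print(f"dynamic metadata: {dynamic} of {CODES} codes ({dynamic / CODES:.0%})")
```

Under static metadata the dark scene gets only about 51 of the 1024 codes (roughly 5%), while per-scene metadata lets it use the full code range.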

It was thought possible that HDR10 would incorporate dynamic metadata in the future, with existing HDR10 televisions made compatible via a firmware update. [The HDR10 format has since been renewed and now has dynamic metadata; the new standard is called HDR10+.]

The HDR10 format is by far the most popular. Large manufacturers such as Samsung, Sony, LG, Panasonic, and Hisense already have models on the market with this technology.

Dolby Vision, on the contrary, is still not very widespread, and only some manufacturers, like LG and Vizio, have bet on it. Models that already have Dolby Vision include the LG OLED and LED UHD ranges, the Vizio R, P, and M series, and the LeEco uMax 85, among others.

Consoles and PCs

Both the Xbox One S and the PS4 Pro have support for HDR10. Some games like Forza Horizon 3 and Gears of War 4 already take advantage of this technology.

Dolby Vision, however, will not reach those consoles; it will have to be the next generation that implements it.

It will reach the PC, though, with Mass Effect: Andromeda, which will be one of the first triple-A games on the market to support Dolby Vision technology.

HDR players

Most Blu-ray HDR players (Samsung, Panasonic, Sony, etc.) support the HDR10 standard.

The new Google Chromecast supports the Dolby Vision standard, so for now it’s the cheapest Dolby Vision-compatible player you can buy.

HDR content

Just as you need 4K content to really take advantage of a 4K TV, you need HDR content to take advantage of an HDR TV. That’s why many streaming platforms such as Netflix and Amazon Prime Video already have content recorded natively in HDR.

Sources for HDR10: Netflix, Amazon Prime Video and YouTube among others

Sources for Dolby Vision: Netflix, Amazon Video and VUDU. Some examples of current movies with this technology are Star Trek Beyond and Batman Vs. Superman


In recent years, television image quality has improved a lot. As a result, several standards have emerged that try to take advantage of this increase in quality and aim to make a movie (or video game, series, etc.) look the same on two different televisions.

On paper, the Dolby Vision (DV) technology is more powerful and can achieve better image quality; nevertheless, few television models implement it at the moment.

The difference between the two formats is not as important as it seems; what matters is the quality of the TV. A good HDR10 TV with more brightness, better contrast, and so on will be much better than a Dolby Vision television with worse specifications.

In fact, both formats are currently limited by the televisions themselves, since no TV is yet able to take full advantage of either standard, and it will take years until they do.

The best approach is to judge the quality of each television beyond the certifications it carries, because a certification never has been, and never will be, a guarantee of buying the best product or the one that best suits you.





Technology News

3D-Printed Wireless Sensors Can Connect Objects

Now 3D-printed objects with wireless sensors can connect with other devices over Wi-Fi

A world where every electronic device is connected to every other, but without any electronic entanglements, isn’t far-fetched. A team of scientists has succeeded in creating 3D-printed plastic objects with built-in wireless sensors that don’t need any electronics to connect with other devices over Wi-Fi. In other words, these 3D-printed objects have no additional electronic components on board to establish a Wi-Fi connection. Instead, they can sense the user’s needs and connect over Wi-Fi to do the needful, with little human intervention.

Making science fiction a reality

The way these sensors work seems plucked straight out of a science-fiction film. They can let a bottle of detergent sense whether it is full or running low, and then go online on its own to place an order. The researchers from the University of Washington became the first team to 3D print plastic objects with wireless sensors that can collect data and communicate with other devices over Wi-Fi automatically.

Creating smart 3D-printed objects

Their goal was to allow millions of users worldwide to create smart objects right out of their 3D printers without much hassle. The major challenge was building sensors that let such objects communicate wirelessly without any electrical components at all, using just a plastic body.

To build smart 3D-printed objects that can communicate with other devices as well as commercial Wi-Fi receivers, the team came up with a novel solution. They made use of backscatter techniques, which allow devices to exchange information by reflecting ambient radio signals rather than generating their own. This was achieved by shifting some functions normally performed by electrical components to mechanical motion. These motions are produced by gears, springs, switches and other intricate parts, all of which can be 3D printed and combined with the printed sensors to perform the desired tasks.
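The core backscatter idea can be illustrated with a small simulation. This is only a sketch of the general on-off keying principle (a switch alternately reflecting and absorbing an ambient signal), not the team's actual implementation; the function names and amplitude values below are made up for illustration.

```python
# Sketch of on-off keying in Wi-Fi backscatter: a mechanical part
# (e.g. a 3D-printed gear tooth) flips an antenna switch between
# reflecting and absorbing the ambient signal, and a receiver reads
# the resulting amplitude changes as bits.

REFLECT, ABSORB = 1.0, 0.1  # illustrative relative amplitudes at the receiver

def transmit(bits):
    """Map each bit to the amplitude the receiver observes."""
    return [REFLECT if b else ABSORB for b in bits]

def receive(amplitudes, threshold=0.5):
    """Threshold the observed amplitudes back into bits."""
    return [1 if a > threshold else 0 for a in amplitudes]

# A hypothetical sensor reading (e.g. detergent level 5 of 255) as 8 bits.
reading = 5
bits = [(reading >> i) & 1 for i in range(7, -1, -1)]
assert receive(transmit(bits)) == bits
```

The point of the scheme is that nothing on the transmitting side needs power: the "modulation" is purely mechanical, and only the receiver does any electronic processing.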

Developing intelligent tools and widgets

The team has already 3D printed a number of tools equipped with these wireless sensors that can sense and send information to other devices: a scale, a wind meter and a water flow meter. The most striking was the flow meter, which was successfully used in tests to track laundry soap and order more automatically.

Apart from these tools, the researchers also created 3D-printed Wi-Fi input widgets such as knobs, sliders and buttons, which can be customized just like the tools. They can be designed to communicate with other devices over Wi-Fi and integrated into 'Internet of Things' smart devices, eventually helping to create seamless connectivity between smart devices and enhance the user experience.

Quantum Internet Goes Hybrid

Scientists turn the quantum internet into a hybrid information network

A recent study published in the journal Nature shows that scientists have succeeded in creating a one-of-a-kind hybrid quantum network link. The rapid development of quantum information networks is seen as a new-age technology that could offer completely new capabilities in communication and information processing, and a hybrid design extends the capability of such a network considerably, with immense future prospects. The team behind this remarkable research comprises Nicolas Maring, Dr. Kutlu Kutluer, Dr. Margherita Mazzera, Dr. Georg Heinze and Pau Farrera from ICFO, led by ICREA Prof. Hugues de Riedmatten.

What is a quantum information network?

A quantum information network, or quantum internet, is made up of quantum nodes that store and process information. These nodes are usually built from matter systems such as doped solids and cold atomic gases, which communicate through particles of light, photons. The major issue faced by scientists is that no single matter system provides optimum transfer of information for every task. That is why they implemented a hybrid network, combining the best features of different material systems.

Earlier research had shown that information can be transferred reliably only between identical nodes; the same had never been achieved in a hybrid network of dissimilar nodes. To make this happen, scientists had to devise a new solution that strengthens the hybrid quantum network while still providing reliable transfer of information. It is worth noting that photons carry the information: a single photon interacts, in a virtually noise-free way, with nodes (matter systems) that operate at different bandwidths and wavelengths.

The future prospects of the hybrid quantum internet

The traditional internet was developed in the early 1980s; its information flows in 'bits', processed and modulated by chips in computing devices and transmitted as light pulses over optical fibers for high-speed transmission from one place to another. In a quantum information network, the bit is replaced by the quantum bit, or 'qubit'. A classical bit carries either a 0 or a 1, but a qubit can hold a superposition of both states at once.
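The bit-versus-qubit distinction can be sketched with a toy simulation. This is a simplified illustration of superposition and measurement, not real quantum hardware; the amplitudes and function names below are illustrative.

```python
import math
import random

def measure(alpha, beta):
    """Collapse a qubit alpha|0> + beta|1>: the Born rule gives outcome 0
    with probability |alpha|^2, outcome 1 otherwise."""
    return 0 if random.random() < abs(alpha) ** 2 else 1

# A classical bit is always exactly 0 or 1. A qubit can sit in an equal
# superposition of both (the state a Hadamard gate produces from |0>):
alpha = beta = 1 / math.sqrt(2)
assert abs(abs(alpha) ** 2 + abs(beta) ** 2 - 1) < 1e-9  # normalization

# Each measurement still yields a single 0 or 1, but roughly half the
# time each, which is what distinguishes the qubit from a fixed bit.
samples = [measure(alpha, beta) for _ in range(10_000)]
```

A reader should note this only mimics the statistics of a single qubit; the real power of a quantum network comes from entangling many such qubits across nodes, which no classical simulation of this kind captures efficiently.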

So a quantum network would allow highly secure data transmission along with enhanced data processing backed by quantum computing, and it would boost the capabilities of many other applications and programs. In short, the quantum internet has the inherent capability to revolutionize the world like never before.
