For many years, Microsoft researchers and their collaborators have been exploring ways to make existing storage approaches more efficient and cost-effective, while also forging entirely new paths – including storing data in media such as glass, holograms, and even DNA. Most of the technologies we use to store data today – flash, hard disk drives, tape – come from an earlier era of computing. Holographic optical storage systems store data by recording the interference between the wavefronts of a modulated optical field containing the data and a reference optical field as a refractive index variation inside the storage media. The DNA Storage project enables molecular-level data storage in DNA molecules, leveraging biotechnology advances in synthesizing, manipulating, and sequencing DNA to develop archival storage.
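To make the molecular encoding concrete, here is a minimal sketch of the basic idea behind writing bits into DNA: with four nucleotides available, each base can carry two bits. This naive mapping is purely illustrative – real codecs, including the DNA Storage project's, add error correction and avoid sequences that are hard to synthesize or sequence, such as long homopolymer runs.

```python
# Illustrative only: a naive 2-bits-per-nucleotide mapping.
BASES = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    """Map each byte to four nucleotides, two bits per base."""
    return "".join(
        BASES[(b >> shift) & 0b11]
        for b in data
        for shift in (6, 4, 2, 0)
    )

def decode(strand: str) -> bytes:
    """Invert the mapping: four bases back into one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for base in strand[i : i + 4]:
            b = (b << 2) | BASES.index(base)
        out.append(b)
    return bytes(out)
```

At two bits per base, a kilobyte of data needs 4,000 nucleotides – the appeal of DNA is that those nucleotides occupy a vanishingly small physical volume and remain readable for a very long time.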
With the Azure Spatial Anchors (ASA) Linux SDK, robots can now use Azure Spatial Anchors to localize and share information within this mixed reality ecosystem. The SDK gives any robot with an onboard camera and a pose estimation system access to the service, letting researchers localize robots to the environment, to other robots, and to people using mixed reality devices – opening the door to better human-robot interaction and greater robot capabilities. By tracking salient feature points across a sequence of images from their onboard cameras and fusing those observations with inertial measurements, mixed reality devices can both estimate how they're moving and build a sparse local map of where these feature points sit in 3D. Android and iOS mobile devices use the same type of visual SLAM algorithms – via ARCore and ARKit, respectively – to render augmented reality content on screen, and these algorithms produce the same kind of sparse maps as mixed reality devices. When a device later observes the same place in the world and queries ASA with a local map of it, some of the feature points in the query map should match those in the cloud map, and ASA uses these correspondences to robustly compute a relative six-degree-of-freedom pose for the device.
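ASA's actual solver is more involved (and is not part of this excerpt), but the core geometric step – recovering a six-degree-of-freedom pose from matched 3D feature points – can be sketched with the classic Kabsch/Umeyama algorithm. Everything below is an illustration of that textbook step, not the ASA SDK:

```python
import numpy as np

def rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t such that dst ≈ R @ src + t.

    src, dst: (n, 3) arrays of matched 3D points (e.g., feature points
    seen in a query map and their counterparts in a cloud map).
    """
    # Center both point sets on their centroids.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered correspondences.
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection: force det(R) = +1.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

The rotation (3 degrees of freedom) and translation (3 more) together give the six-degree-of-freedom relative pose; a production system would wrap a step like this in a robust estimator such as RANSAC to reject bad feature matches.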
The robotized haptic handle deploys when needed, approaching and finally reaching the hand to create the feeling of first contact – going from a bare hand to one holding an object – thus mimicking our natural interaction with physical objects in a way that traditional handheld controllers can't. A combination of mechanics, electronics, firmware, and software works together from the moment the apple enters reaching range to the moment it's resting in the palm of the individual's hand. Once the hand closes to within 10 centimeters of the apple, the handle moves proportionally closer, finally landing in the palm at the same moment the fingers wrap around the virtual fruit. As with dropping an apple into a basket, throwing relies on PIVOT sensing the motion of the hand and the release of the haptic handle, which coincides with the release of the virtual object.
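The proportional approach behavior described above can be sketched as a toy control mapping. PIVOT's real firmware is not published in this excerpt; the 10-centimeter threshold comes from the description, and the function name is illustrative:

```python
REACH_RANGE_CM = 10.0  # threshold from the description above

def handle_extension(hand_to_object_cm: float) -> float:
    """Toy proportional mapping from hand-object distance to handle deployment.

    Returns a fraction in [0, 1]:
      0.0 -> fully retracted (hand outside reaching range),
      1.0 -> fully deployed against the palm (hand at the virtual object).
    Inside the range, the handle tracks the hand linearly, so it arrives
    in the palm exactly as the hand reaches the virtual object.
    """
    if hand_to_object_cm >= REACH_RANGE_CM:
        return 0.0
    return 1.0 - max(hand_to_object_cm, 0.0) / REACH_RANGE_CM
```

A real controller would also smooth the motor command and account for the handle's travel time, but the linear mapping captures why contact with the handle and "contact" with the virtual apple coincide.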
In my role, I enjoy a unique perspective on the relationship among three attributes of human cognition: monolingual text (X), audio or visual sensory signals (Y), and multilingual text (Z). By maximizing the information-theoretic mutual information of these representations, using 50 billion unique query-document pairs as training data, X-code learned the semantic relationships among queries and documents at web scale, and it demonstrated strong performance in various natural language processing tasks such as search ranking, ad click prediction, query-to-query similarity, and document grouping. Z-code extends the monolingual X-code by enabling text-based multilingual neural translation for a family of languages. Through transfer learning and the sharing of linguistic elements across similar languages, we've dramatically improved the quality, reduced the costs, and increased the efficiency of machine translation in Azure Cognitive Services (see Figure 4 for details).
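The excerpt does not give X-code's exact training objective, but a common surrogate for maximizing mutual information between paired representations is the InfoNCE contrastive loss, sketched here over a batch of query and document embeddings (function and parameter names are illustrative, not X-code's):

```python
import numpy as np

def info_nce_loss(q: np.ndarray, d: np.ndarray, temperature: float = 0.07) -> float:
    """InfoNCE loss over paired (query, document) embeddings.

    q, d: (batch, dim) arrays where row i of q pairs with row i of d.
    Minimizing this loss maximizes a lower bound on the mutual
    information between the two representations: each query is pushed
    toward its own document and away from the other documents in the batch.
    """
    # L2-normalize so similarity is cosine similarity.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    d = d / np.linalg.norm(d, axis=1, keepdims=True)
    logits = (q @ d.T) / temperature             # all pairwise similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs.diagonal().mean())   # true pairs lie on the diagonal
```

Correctly paired embeddings yield a lower loss than mispaired ones, which is exactly the signal that lets a model like X-code align queries with their documents at scale.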