Among its many new devices at this week's Mobile World Congress in Barcelona, the Finnish company Nokia is demonstrating two cell phones that are intriguing in very different ways. The "Remade" is built almost entirely from recycled materials, while the N96 is Nokia's high-end "multimedia computer" phone.

Nokia's "Remade" recycled phone (left) and N96 "multi-media computer" phone (right).

The attraction of the Remade lies in its future potential. Although the phone isn't functional, Nokia hopes that similar concepts could be implemented in future devices. The clamshell phone's casing is made entirely from recycled aluminum cans, and its chassis is made from the plastic of recycled drink bottles. The rubber keys come from old car tires. The screen and circuit board also minimize environmental impact by using manufacturing techniques such as printed electronics, and the display graphics are specially selected to save energy.

While the "Remade" concept itself isn't planned for production, all of its components are plausible and could be applied to a real phone. Nokia hopes that the knowledge gathered while designing and building the phone will play a role in future Nokia handsets.

The N96, on the other hand, is the hot phone of today. The successor to the N95, the N96 contains 16GB of on-board memory, which is enough to hold about 40 hours of video or 12,000 songs. If that's not enough space, the phone also has a slot for an 8GB microSD memory card. The device also has GPS, a 5-megapixel camera, a DVB-H television tuner for watching live TV, and even a little kick-stand so the phone can be propped up when viewing the 2.8-inch display. The N96 will be available in September for around US$800.

via: PhoneMag and Gizmodo

Citation: Nokia Unveils 1 Green Phone, 1 Super Phone (2008, February 13) retrieved 18 August 2019 from https://phys.org/news/2008-02-nokia-unveils-green-super.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.
(PhysOrg.com) — GPS is getting an upgrade costing $8 billion (US), which aims to increase the system's accuracy, improve its reliability, and make the technology even more widespread.

The Global Positioning System (GPS) is almost everywhere these days, not just in navigational aids in vehicles and mobile phones but in many everyday industrial and commercial applications. GPS enables courier companies to track their shipments, for example, and it enables ATMs and financial institutions to time-stamp transactions. It is used in emergency hospital paging systems, and it helps firefighters find fires. Colonel David B. Goldstein, chief engineer for the upgrade, said they know the world relies on GPS, but the ever-increasing number of devices using GPS also increases the strain on the system.

GPS uses a "constellation" of 24 satellites orbiting approximately 11,000 miles above the surface of the Earth, and the orbits are arranged so that at any time there are always at least a half dozen or so satellites overhead. GPS receivers pinpoint their location by working out exactly how far away they are from at least three or four of the GPS satellites, by analyzing the radio-frequency signals the satellites transmit continuously. They receive extremely accurate time information from atomic clocks aboard the satellites.

As part of the $8 billion upgrade, the satellites will be replaced one by one to minimize the chance of disruption. Boeing Co.'s Space and Intelligence Systems and Lockheed Martin are constructing 30 new satellites between them, which will allow for six spare satellites to be available if needed. The new satellites will eventually triple the signals available for commercial use. The equipment on the satellites will include even more accurate atomic clocks, able to keep time to a fraction of a billionth of a second.

The upgraded system will significantly increase accuracy, allowing a location to be pinpointed to within just a couple of feet instead of the current +/-20 feet margin of error. It will also make the system faster, and there will be provision to prevent disruptions such as accidental jamming of GPS, which in the recent past has caused disruption to emergency services and mobile phone services, as well as causing power outages.

GPS was originally developed by the Pentagon over 30 years ago at the Los Angeles Air Force Base in El Segundo. Until GPS was developed, vessels such as nuclear submarines, submerged for months at a time, had no precise way of knowing exactly where they were, and this meant the accuracy of any missiles fired would have been diminished. When the system was proposed by Air Force Colonel Bradford W. Parkinson three decades ago, he was told it would be useless and had no future.

An El Segundo team of scientists and engineers is among those working on the upgrade, which is expected to take around a decade. Marco Caceres, senior space analyst for the research company Teal Group, said the upgraded system will be able to deliver capabilities we have not seen before.

The satellites used globally for GPS are controlled by the Pentagon in the U.S., but the European Union, China and Russia are all attempting to build their own systems to reduce their reliance on U.S. military technology.

Citation: GPS getting an upgrade – for $8 billion (2010, May 25) retrieved 18 August 2019 from https://phys.org/news/2010-05-gps-billion.html

© 2010 PhysOrg.com
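The numbers above can be sanity-checked with two textbook formulas: Kepler's third law gives the satellites' orbital period from the quoted altitude, and multiplying signal travel time by the speed of light gives the receiver-to-satellite range that positions are computed from. A minimal Python sketch (the physical constants are standard reference values, not figures from the article):

```python
import math

MU = 3.986004418e14     # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6.371e6       # mean Earth radius (m)
C = 299_792_458.0       # speed of light (m/s)
ALT = 20_200e3          # GPS orbital altitude, ~11,000 miles, in meters

# Kepler's third law: period from the orbit's semi-major axis.
a = R_EARTH + ALT
period_h = 2 * math.pi * math.sqrt(a**3 / MU) / 3600
print(f"orbital period: {period_h:.2f} h")   # ~11.97 h, half a sidereal day

# A receiver converts signal travel time into range. For a satellite
# directly overhead, the signal takes about 67 ms to arrive:
travel_ms = ALT / C * 1e3
print(f"signal travel time: {travel_ms:.1f} ms")
```

The roughly 12-hour period is why each satellite retraces nearly the same ground track twice a day; a real receiver solves for four unknowns (x, y, z and its own clock error) from at least four such ranges, which is why the article mentions needing three or four satellites in view.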
(PhysOrg.com) — Conventionally, data storage and data processing are done at the user's own computer, using that computer's storage system and processor. An alternative to this method is cloud computing, which is Internet-based computing that enables users at home or office computers to transfer data to a remote data center for storage and processing.

Some advantages of cloud computing are that it offers high-capacity storage and high-performance computing from any location with Internet access, while not requiring users to invest in new hardware or upgrade their software. Cloud computing systems can be free (such as Google Docs), or users may pay a yearly subscription fee or a fee per resources used.

Cloud computing offers potential benefits – especially financial ones – to users, but in a new study, researchers have investigated a different aspect of cloud computing: how does its energy consumption compare with conventional computing? In their study, to be published in the Proceedings of the IEEE, Jayant Baliga and coauthors from the University of Melbourne in Victoria, Australia, have found that cloud computing is not always the greenest option. They investigated using cloud computing for three different services – storage, software, and processing – on public and private systems. (A public cloud is hosted on the Internet, and a private cloud is hosted within a company behind its firewall.) While previous studies of energy consumption in cloud computing have focused only on the energy consumed in the data center, the researchers found that transporting data between data centers and home computers can consume even larger amounts of energy than storing it.

"The most important conclusion in our analysis is that, when comparing the energy consumption of cloud-based services with that of a typical desktop PC, we must include the energy consumption required to transport the data from the user into the cloud resources and back," Rod Tucker, leader of the University of Melbourne research team, told PhysOrg.com. "This is particularly important if the cloud service is provided via the public Internet. Some papers that have claimed that cloud computing provides a 'greener' alternative to current desktop computing fail to include the energy consumption involved with transporting the data from the user into the cloud. In many cases, we may find that the data centers used by cloud-based services are located in another city, state or even country."

In general, not much attention has been paid to the energy consumed in transmitting data, since cloud computing is more often praised for its other features.

"Energy efficiency is crucial in two contexts," Tucker said. "Firstly, if the user device is a mobile device (phone, i-pad, PDA, etc.), then its battery lifetime is a key issue. Secondly, as the use of cloud services balloons, its energy consumption will likewise grow. The US Environmental Protection Agency estimated that in 2007 servers and data centers were responsible for about 0.5% of US greenhouse gas production. The greenhouse gas production that results from power consumption of data centers is expected to double between 2007 and 2020 if we just continue with business as usual. Without careful consideration of the power consumption of cloud services, their growing popularity will become a significant contributor to greenhouse gas production. Therefore, we need to develop technologies and strategies to address this issue before cloud services become more widespread."

When using the cloud for data storage (such as storing documents, photos, and videos using services such as Amazon Simple Storage), the researchers found that cloud computing can consume less power than conventional computing when the cloud service is used infrequently and at low intensities. This is because, at low usage levels, power consumption for storage dominates total power consumption, and power consumption for transport is minimal. But at medium and high usage levels, more energy is required to transport data, so transport dominates total power consumption and greatly increases the overall energy consumed. Specifically, power for transport can be as low as 10% and 25% of the total at low usage levels for private and public storage services, respectively, but nearly 60% and 90%, respectively, at high usage levels. Overall, though, cloud storage services use less energy than cloud software and cloud processing services.

For cloud software services (such as Google Docs), the power consumption in transport is negligibly small as long as screen refresh rates are low (lower than 0.1 frames/sec, where 1 frame/sec means that 100% of the screen changes every second; a smaller percentage of the screen changing corresponds to a smaller screen refresh rate). However, for cloud software services, the biggest factor determining energy efficiency is the number of users per server, where more users corresponds to lower power consumption per user. In this respect, public cloud computing, with its larger number of users, would benefit more than private cloud computing.

For cloud processing services (in which a server such as Amazon Elastic Compute Cloud processes large computational tasks only, and smaller tasks are processed on the user's computer), the researchers again found that the cloud alternative consumes less energy only under certain conditions. The results showed that, for public cloud processing services, data transport consumed large amounts of energy compared to private cloud processing services, particularly at high usage levels. The reason is that the large number of router hops required on the public Internet greatly increases the energy consumption in transport, while private cloud processing requires significantly fewer routers. Still, the researchers found that, for both public and private clouds, a cloud processing service is more energy-efficient than older-generation PCs.

The results of the study mean different things for different users. As the researchers explain, home computer users can achieve significant energy savings by using low-end laptops for routine tasks and cloud processing services for computationally intensive tasks that are infrequent, instead of using a mid- or high-end PC. For corporations, it is less clear whether the energy saved in transport with a private cloud, compared to a public cloud, offsets the private cloud's higher energy consumption. Private clouds that serve a relatively small number of users may not benefit from the same energy-saving techniques due to their smaller scale.

Overall, the researchers predict that the technology used in cloud computing – for example, data centers, routers, switches, etc. – will continue to become more energy-efficient. Most importantly, they identify the energy efficiency of data transport as one of the biggest areas for improvement, especially as cloud computing becomes more widespread.

"Many industry participants see the evolution toward mobility will intrinsically mean an evolution toward cloud-based services," Tucker said. "The reason is that mobile access devices will have limited processing and storage capacity (due to size and power constraints) and so the most convenient place to put the applications and data is in the cloud. The user device will contain little more than a browser when it is started up. Any application or data that it requires will be brought down from the cloud. When that application is finished, its data will be put back into the cloud and the application will be removed from the user device until it is again required. In this way, the user device is kept simple, energy-efficient and cheap."

Researchers have found that, at high usage levels, the energy required to transport data in cloud computing can be larger than the amount of energy required to store the data. Image credit: Wikimedia Commons.

More information: Jayant Baliga, et al. "Green Cloud Computing: Balancing Energy in Processing, Storage and Transport." Proceedings of the IEEE. To be published. DOI: 10.1109/JPROC.2010.2060451

Citation: How energy-efficient is cloud computing? (2010, October 8) retrieved 18 August 2019 from https://phys.org/news/2010-10-energy-efficient-cloud.html

Copyright 2010 PhysOrg.com. All rights reserved. This material may not be published, broadcast, rewritten or redistributed in whole or part without the express written permission of PhysOrg.com.
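The storage-service result can be illustrated with a toy model: per-user storage power is roughly constant, while transport energy grows with how often data is moved. The numbers below are made-up placeholders chosen only to show the crossover between storage-dominated and transport-dominated regimes, not figures from the Baliga study:

```python
# Toy model: share of per-user power going to storage vs. transport
# as usage intensity grows. All constants are illustrative placeholders.

def energy_breakdown(downloads_per_hour,
                     file_mb=10.0,
                     storage_w=0.1,             # assumed storage power per user (W)
                     transport_j_per_mb=20.0):  # assumed transport energy (J/MB)
    """Return (storage_share, transport_share) of total per-user power."""
    transport_w = downloads_per_hour * file_mb * transport_j_per_mb / 3600
    total = storage_w + transport_w
    return storage_w / total, transport_w / total

for rate in (0.1, 1, 10, 100):  # downloads per hour
    s, t = energy_breakdown(rate)
    print(f"{rate:6.1f}/h  storage {s:6.1%}  transport {t:6.1%}")
```

At a fraction of a download per hour, transport is a few percent of the total; at a hundred downloads per hour it is nearly everything, which is the qualitative pattern the researchers describe for storage services.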
(PhysOrg.com) — The world of technology is a lot like Janus. For those of you who slept through the mythology portion of your high school world lit, Janus has two faces looking in opposite directions, a lot like the recent innovations we have seen. With most tech innovations, things are either getting really big or getting really small. Today, we are going to look at a new bit of technology that falls on the small side of the equation. Foremay, a maker of portable memory tools, has begun to show off its latest micro memory product.

It is an SSD card, named the OC177 DOC, and while that name may still need some tweaking, the product is certainly garnering media attention. The card is, as you have guessed by now, small: roughly the size of a US quarter. For those of you without one in your pocket or purse, the dimensions come out to roughly 22 x 22 x 1.8 mm. The SSD can fit either 32GB or 64GB of flash memory, which is fairly impressive. To put that in perspective, the 64GB model could hold roughly 14,000 songs from your iTunes library, the same amount of memory as you get on higher-capacity tablet PCs.

I know what you're thinking: storage is great, but what about speed? Data that is slow to access can create a frustrating user experience, no doubt. The OC177 DOC isn't the fastest chip on the block, but it does clock in at a respectable speed. According to the information released by the company, it has a read speed of roughly 70MB/s and a write speed of 40MB/s.

No word yet on costs, but you can expect to see the OC177 DOC showing up in devices by the end of the third quarter of 2011.
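Those capacity and speed figures are easy to sanity-check. Assuming a typical compressed track of about 4.5 MB (an assumption for illustration, not a number from Foremay), 64GB works out to roughly 14,000 songs, and the quoted 40MB/s write speed would fill the whole drive in under half an hour:

```python
CAPACITY_GB = 64
WRITE_MB_S = 40     # quoted sequential write speed
SONG_MB = 4.5       # assumed size of a typical compressed track

capacity_mb = CAPACITY_GB * 1000  # decimal megabytes, as drive makers count
songs = capacity_mb / SONG_MB
fill_minutes = capacity_mb / WRITE_MB_S / 60

print(f"~{songs:,.0f} songs")                 # ~14,222
print(f"time to fill: {fill_minutes:.0f} min")  # ~27 min
```

The song estimate lands right at the "roughly 14,000" figure in the article, which suggests a similar per-track size was assumed there.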
More information: PDF release: www.foremay.net/pr/disk-on-chi … lest-ssd-foremay.pdf

Citation: 64GB SSD memory in a quarter sized chip (2011, March 30) retrieved 18 August 2019 from https://phys.org/news/2011-03-64gb-ssd-memory-quarter-sized.html

© 2010 PhysOrg.com
With a big enough heatsink, can a high-end graphics card be passively cooled? No fan? No noise? China-based manufacturer Colorful showed off its answer, introduced at Computex earlier this month. Colorful has what it says is the world's first passively cooled GeForce GTX 680 graphics card, the iGame GTX 680 "Passive." Instead of a cooling fan, two very large aluminum heatsinks carry the task of drawing heat away from the core. Colorful set out to eliminate noise completely with a passively cooled graphics card in its iGame range that uses Nvidia's GeForce GTX 680 chipset.

The GTX 680 is a powerful single-core graphics card from Nvidia; GeForce refers to a brand of graphics processing units (GPUs) designed by Nvidia. In March it was announced that the first chip based on the Kepler architecture was hitting the market, aboard a new graphics card called the GeForce GTX 680. The passively cooled GeForce GTX 680 model uses 20 heatpipes and two aluminum heatsinks, and Colorful claims this is the first zero-noise GTX 680 solution.

Colorful is considered one of Nvidia's most important board partners in Asia. Established in 1995, Colorful conducts research, designs, manufactures, and sells consumer graphics cards. Those familiar with Colorful regard it as a company that frequently comes up with surprises. One such description is that Colorful is "an unorthodox producer of Nvidia cards," according to PC reviews site HEXUS. A Singapore-based technology site refers to Colorful as making "some of the most outrageous and over-the-top graphics cards you will find."

Colorful's cooling solution combines the 20 heatpipes with 280 aluminum fins. In reviewing the announcement, a note of concern was struck over the fact that Colorful has not yet mentioned clock speeds. Geek.com wonders if the company might have underclocked the GPU to help keep temperatures to a minimum. "If it hasn't been underclocked, then it may be a card worth keeping an eye out for," said the report. The techPowerUp site said the design guarantees reliable silent operation at reference clock speeds or with mild overclocking.

No price or release date has been announced; Colorful is said to be still assessing the marketability of the design. When Colorful first showed off the iGame card at Computex 2012 in Taipei earlier this month, the product was described as the "iGame GeForce GTX 680 Silent" and drew prompt attention as a card that relies completely on passive cooling, not a fan.

This is not the first time, however, that a manufacturer has achieved a passively cooled graphics card, and more competition is likely to emerge sooner rather than later, under different partnerships. Sapphire announced in early June that it had come up with its new passively cooled Radeon HD 7770 card. Like the Colorful entry, it does not use a fan but instead dissipates heat via a "heatspreader." Sapphire partners with AMD.

More information: www.expreview.com/20045.html (in Chinese)

Citation: Colorful creates passively cooled Nvidia graphics card (2012, June 29) retrieved 18 August 2019 from https://phys.org/news/2012-06-passively-cooled-nvidia-graphics-card.html

© 2012 Phys.Org
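Whether two heatsinks can replace a fan comes down to thermal resistance. As a rough sketch (the power and temperature figures below are assumptions for illustration, not from the article: the reference GTX 680 is commonly rated around 195 W, and GPUs typically throttle near 98 °C), the cooler must move heat to the surrounding air at roughly 0.35 °C per watt:

```python
# Back-of-the-envelope thermal budget for a fanless GPU cooler.
# Assumed figures (not from the article): ~195 W board power,
# ~98 C GPU throttle point, ~30 C air temperature inside the case.
TDP_W = 195.0
T_MAX_C = 98.0
T_AMBIENT_C = 30.0

# Maximum allowed thermal resistance from GPU die to ambient (C/W):
r_max = (T_MAX_C - T_AMBIENT_C) / TDP_W
print(f"required thermal resistance: {r_max:.2f} C/W")  # ~0.35 C/W
```

Reaching a figure that low with natural convection alone demands a very large fin area, which is presumably why the design resorts to 20 heatpipes feeding 280 fins, and why underclocking (lowering TDP_W, and so relaxing the budget) would make the job easier.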
(Phys.org)—A team of researchers from Norway, France and the Netherlands has found a new way to identify and measure seismic slips that occurred along fault lines during ancient earthquakes. In their paper published in the journal Science Advances, the team describes their study of garnet crystals found along a fault zone and what they discovered.

Journal information: Science Advances

Completely fragmented garnet crystal (about 1 millimeter in diameter) located at the earthquake slip surface. The upper left part of the crystal was cut off by the earthquake. Credit: Kristina Dunkel

Focused ion beam image of small mineral inclusions that formed inside the fragmented garnet crystal due to fluid transport into pores in the garnet in the wake of the earthquake. The largest inclusions are about 20 micrometers long. Credit: Made by Oliver Plümper

Field image showing the fault displacement (visible by the offset of the dark layer) associated with an earthquake of an estimated magnitude around 6. This earthquake occurred in Western Norway some 50 kilometers below the Earth's surface about 420 million years ago. Credit: Bjørn Jamtveit

Because of erosion, there are very few ways to identify or measure an earthquake that occurred in the distant past. For that reason, geologists have been looking for better markers.
In this new effort, the researchers report on a new marker in Norway that they believe could prove useful for identifying similar early earthquakes in other parts of the world.

To learn more about ancient earthquakes, the researchers were studying rocks along a fault line at the Bergen Arcs in western Norway. There, they came across garnet formations that looked as if they had been melted and smeared as the edges of two continents slid past one another. They extracted some samples and took them back to their lab for analysis. Examining them through a microscope revealed microfractures so small that they did not change the shape of the rock. A closer look showed that melted material had been injected into the fractures, leaving behind a network of minerals, among them uranium. The team suggests the microfracturing occurred as a result of the earthquake. Next, because uranium decays to lead, the researchers measured the lead content in the material within the fractures and calculated a date for when the earthquake occurred: approximately 420 million years ago.

The researchers also found that they could use the garnet to measure the amount of slippage that occurred between layers of rock. Their calculations indicated the earthquake would have measured 6 to 6.5 on the Richter scale, which means it was relatively strong. They note that humans would not have been around to feel it, but animals at the time likely did. The researchers suggest their method could be used as a marker for finding other ancient earthquake sites and to aid in measuring their slip rates, and thus earthquake magnitudes as well.

3D visualization of the structure. Credit: Made by Oliver Plümper

More information: Håkon Austrheim et al. Fragmentation of wall rock garnets during deep crustal earthquakes, Science Advances (2017). DOI: 10.1126/sciadv.1602067

Abstract: Fractures and faults riddle the Earth's crust on all scales, and the deformation associated with them is presumed to have had significant effects on its petrological and structural evolution. However, despite the abundance of directly observable earthquake activity, unequivocal evidence for seismic slip rates along ancient faults is rare and usually related to frictional melting and the formation of pseudotachylites. We report novel microstructures from garnet crystals in the immediate vicinity of seismic slip planes that transected lower crustal granulites during intermediate-depth earthquakes in the Bergen Arcs area, western Norway, some 420 million years ago. Seismic loading caused massive dislocation formations and fragmentation of wall rock garnets. Microfracturing and the injection of sulfide melts occurred during an early stage of loading. Subsequent dilation caused pervasive transport of fluids into the garnets along a network of microfractures, dislocations, and subgrain and grain boundaries, leading to the growth of abundant mineral inclusions inside the fragmented garnets. Recrystallization by grain boundary migration closed most of the pores and fractures generated by the seismic event. This wall rock alteration represents the initial stages of an earthquake-triggered metamorphic transformation process that ultimately led to reworking of the lower crust on a regional scale.

Citation: Garnet crystal microstructures formed during ancient earthquake provide evidence for seismic slip rates along a fault (2017, February 23) retrieved 18 August 2019 from https://phys.org/news/2017-02-garnet-crystal-microstructures-ancient-earthquake.html

© 2017 Phys.org
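The uranium-lead dating step can be sketched as a simple decay calculation. For a single parent-daughter system such as U-238 → Pb-206 (half-life ~4.47 billion years), and assuming no initial lead, the age follows from the measured daughter-to-parent ratio as t = ln(1 + Pb/U) / λ. The ratio used below is a made-up illustration chosen to land near the article's 420-million-year date, not the paper's actual measurement:

```python
import math

HALF_LIFE_U238_YR = 4.468e9               # half-life of U-238, in years
LAM = math.log(2) / HALF_LIFE_U238_YR     # decay constant (1/yr)

def u_pb_age(pb206_per_u238):
    """Age in years from the Pb-206/U-238 atomic ratio (no initial lead)."""
    return math.log(1.0 + pb206_per_u238) / LAM

# A ratio of ~0.0673 corresponds to roughly 420 million years, matching
# the date quoted for the Bergen Arcs earthquake.
print(f"{u_pb_age(0.0673) / 1e6:.0f} Myr")  # ~420 Myr
```

Real U-Pb geochronology cross-checks two decay chains (U-238/Pb-206 and U-235/Pb-207) and corrects for any lead present when the mineral formed, but the core arithmetic is this one-line exponential-decay inversion.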
Tennis ace-turned-actor Leander Paes was on Wednesday felicitated by a jewellery house at a star-rated property in south Delhi. Actor Priyanshu Chatterjee was also spotted at the do, which was attended by Delhi socialites Anjana Kuthiala, Shahnaz Husain, Sapna Maken and Ajiesh Oberoi, among others. Here's a recap.
The 38th edition of the IHGF-Delhi Autumn Fair 2014 will be flagged off on 14 October in the Capital at the India Expo Centre and Mart. Organised by the Export Promotion Council for Handicrafts (EPCH), the event will host about 2,700 exhibitors from across the country, with their products spread over 15 halls covering approximately 1,90,000 sq. metres of space.

According to Lekhraj Maheshwari, Chairman, EPCH, more than 4,000 buyers are expected to source their requirements at this fair. Countries like the USA, UK, Japan, France, Italy, Canada, Switzerland, Norway, Sweden, China and Australia are expected to send buyers, and buyers from the Latin American region (particularly Argentina, Colombia, Brazil, Panama and Chile), Central Asia, Africa and the Middle East will also be sourcing their requirements. The products on display include a huge variety of handicrafts, gifts and lifestyle products from a cross-section of handmade manufacturers from India.

When: 14 – 18 October
Where: India Expo Centre & Mart, Greater Noida
Kolkata: The coaching course for civil service aspirants at Sidho Kanho Birsha University has evoked a great response among students. The course is the only one of its kind offered by a university for its own students. It may be mentioned that Chief Minister Mamata Banerjee has time and again urged students to sit for the civil service examinations organised by the Public Service Commission (PSC) and the Union Public Service Commission (UPSC). She has even allowed students of different universities to be present at the administrative review meetings she holds in every district.

The course at Sidho Kanho Birsha University began on March 29 in the presence of Vice-Chancellor Professor Dipak Ranjan Mondol. Gurudas Mondol, the placement officer, is coordinating the classes and mock tests. The classes are currently held three days a week: Tuesday, Wednesday and Thursday. Two classes are held each day, from 10 am to 11 am and from 4 pm to 5 pm, before and after regular university classes. There are 56 students, and the monthly tuition fee is Rs 100. Steps are on to hold classes throughout the week.

It has been decided by the university authorities that its teachers will take the subject classes, while WBCS and IAS officers will teach the general studies classes. It may be mentioned that private institutions preparing students for the WBCS examination charge over a lakh in tuition fees. Dipankar Mahato, IAS, Deputy Jute Commissioner, is taking classes, and Babulal Mahato, WBCS, has been made the mentor. The students are using books and periodicals, and the Paschimanchal Unnyan Affairs department has proposed to set up a building where classes will be held. The building will have a conference hall and a library.

It may be mentioned that there are a few institutions in Bengal that offer coaching for the PSC and UPSC examinations. Presidency College had offered courses for the UPSC examination, but those coaching classes were not meant for the students of the college. The coaching programme stopped after the college was upgraded to a university.
Kolkata: The Mamata Banerjee government has recently been certified as number one in the country in preventing parent-to-child transmission of HIV. Chief Minister Mamata Banerjee tweeted in this connection on Friday: "Today is HIV Vaccine Awareness Day. I am proud to say that the Central Govt agency NACO has recently certified Bengal as Number 1 in preventing parent-to-child transmission of HIV."

It may be mentioned that after its success in e-governance, ease of doing business and the MSME sector, Bengal has once again topped the country, this time in the health sector, by preventing 16.5 lakh cases of parent-to-child transmission of HIV. After the change of guard in the state, Bengal has witnessed major development in the health sector, with several multi-superspeciality hospitals being set up and free treatment being provided in state-run hospitals. Free treatment is also being provided to people suffering from HIV.

In a bid to take stock of the present situation and to discuss different issues related to the state's health sector, the Chief Minister will hold a meeting in Nabanna Sabhaghar on May 22. Sources said that the top brass of the state Health department will be present at the meeting, and senior officials of all the medical colleges and hospitals have been directed to attend, as have the heads of all the departments. Representatives of Rogi Kalyan Samities will also be present; according to sources in Nabanna, there will be a turnout of around 425 officials.

It may be mentioned that Banerjee had held such a meeting soon after coming to power in 2011, in which block medical officers were also present. That meeting was held in the Town Hall.