Monday, 23 November 2015
SAND IN THE WATER
If we try to dissolve sand in water it does not dissolve, but we are trying to invent a special method by which sand can be dissolved in water or in any other liquid. The main idea behind it is that "there is interatomic space between the particles of hydrogen and oxygen in water."
Sunday, 15 November 2015
"HYPERSONIC SPEED"
In aerodynamics, a hypersonic speed is one that is highly supersonic. Since the 1970s, the term has generally been assumed to refer to speeds of Mach 5 and above. The precise Mach number at which a craft can be said to be flying at hypersonic speed varies, since individual physical changes in the airflow (like molecular dissociation and ionization) occur at different speeds; these effects collectively become important around Mach 5. The hypersonic regime is often alternatively defined as speeds where ramjets do not produce net thrust.
Characteristics of flow
While the definition of hypersonic flow can be quite vague and is generally debatable (especially because there is no sharp discontinuity between supersonic and hypersonic flows), a hypersonic flow may be characterized by certain physical phenomena that can no longer be analytically discounted, as they are in supersonic flow. The peculiarities of hypersonic flows are as follows:
1-Shock layer
2-Aerodynamic heating
3-Entropy layer
4-Real gas effects
5-Low density effects
6-Independence of aerodynamic coefficients with Mach number.
- Small shock stand-off distance
As a body's Mach number increases, the density behind the
shock generated by the body also increases, which corresponds to a
decrease in volume behind the shock wave due to conservation of
mass. Consequently, the distance between the shock and the body decreases
at higher Mach numbers.
- Entropy layer
As Mach numbers increase,
the entropy change across the shock also increases, which results in
a strong entropy gradient and highly vortical flow that mixes
with the boundary layer.
- Viscous interaction
A portion of the large kinetic energy associated with flow at high
Mach numbers transforms into internal energy in the fluid due to
viscous effects. The increase in internal energy is realized as an
increase in temperature. Since the pressure gradient normal to the flow
within a boundary layer is approximately zero for low to moderate
hypersonic Mach numbers, the increase of temperature through the boundary
layer coincides with a decrease in density. This causes the bottom of
the boundary layer to expand, so that the boundary layer over the body
grows thicker and can often merge with the shock wave near the body
leading edge.
- High temperature flow
High temperatures, caused largely by viscous dissipation, produce non-equilibrium chemical flow properties such as vibrational excitation and the dissociation and ionization of molecules, resulting in convective and radiative heat flux. (A small numerical sketch of the normal-shock jumps underlying the stand-off and heating effects above follows this list.)
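To make the shock stand-off and heating effects a little more concrete, here is a minimal sketch of my own (not from the original post), using the Rankine-Hugoniot jump relations for a calorically perfect gas with an assumed ratio of specific heats of 1.4. The density ratio across a normal shock saturates near (γ+1)/(γ−1) = 6, which is why the shock hugs the body, while the temperature ratio keeps climbing, which is why heating dominates:

```python
# Normal-shock jump relations for a calorically perfect gas (assumed gamma = 1.4).
# Illustrates why the shock moves closer to the body (density ratio saturates near 6)
# while aerodynamic heating keeps growing (temperature ratio rises without bound).
GAMMA = 1.4

def density_ratio(mach):
    """rho2/rho1 across a normal shock (Rankine-Hugoniot)."""
    return ((GAMMA + 1.0) * mach**2) / ((GAMMA - 1.0) * mach**2 + 2.0)

def temperature_ratio(mach):
    """T2/T1 across a normal shock."""
    pressure_ratio = 1.0 + 2.0 * GAMMA / (GAMMA + 1.0) * (mach**2 - 1.0)
    return pressure_ratio / density_ratio(mach)

for mach in (2, 5, 10, 25):
    print(f"M = {mach:>2}: rho2/rho1 = {density_ratio(mach):5.2f}, "
          f"T2/T1 = {temperature_ratio(mach):7.1f}")
```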
Classification of Mach regimes
Although "subsonic" and "supersonic" usually refer
to speeds below and above the local speed of sound respectively, aerodynamicists often
use these terms to refer to particular ranges of Mach values. This
occurs because a "transonic regime" exists around M=1 where
approximations of the Navier–Stokes equations used for subsonic design no
longer apply, partly because the flow locally exceeds M=1 even when the
freestream Mach number is below this value. The "supersonic
regime" usually refers to the set of Mach numbers for which
linearised theory may be used; for example, where the (air) flow is not
chemically reacting and where heat transfer between air and vehicle may be
reasonably neglected in calculations. Generally, NASA defines
"high" hypersonic as any Mach number from 10 to 25, and re-entry
speeds as anything greater than Mach 25. Among the aircraft operating in
this regime are the Space Shuttle and (theoretically) various developing
spaceplanes.
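To keep these bands straight, here is a tiny sketch of my own. The cut-offs are the approximate values quoted above and are not sharp physical boundaries; the transonic band of roughly Mach 0.8 to 1.2 is a common rule of thumb that I have assumed here:

```python
def mach_regime(mach):
    """Rough classification of a flight Mach number into the regimes
    described above. Boundaries are approximate and debated."""
    if mach < 0.8:
        return "subsonic"
    if mach < 1.2:
        return "transonic"
    if mach < 5.0:
        return "supersonic"
    if mach < 10.0:
        return "hypersonic"
    if mach <= 25.0:
        return "high hypersonic"
    return "re-entry speeds"

for m in (0.5, 0.9, 3, 7, 20, 30):
    print(f"Mach {m}: {mach_regime(m)}")
```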
Similarity parameters
The categorization of airflow relies on a number of similarity
parameters, which allow the simplification of a nearly infinite number of test
cases into groups of similarity. For transonic and compressible flow, the
Mach and Reynolds numbers alone allow good categorization of many
flow cases.
Hypersonic flows, however, require other similarity parameters. First,
the analytic equations for the oblique shock angle become nearly
independent of Mach number at high (~>10) Mach numbers. Second, the
formation of strong shocks around aerodynamic bodies means that the
freestream Reynolds number is less useful as an estimate of the behavior
of the boundary layer over a body (although it is still important).
Finally, the increased temperature of hypersonic flows means that real gas
effects become important. For this reason, research in hypersonics is
often referred to as aerothermodynamics, rather than
aerodynamics. The introduction of real gas effects means that more
variables are required to describe the full state of a gas. Whereas a
stationary gas can be described by three variables (pressure, temperature,
adiabatic index), and a moving gas by four (flow velocity), a hot gas in
chemical equilibrium also requires state equations for the chemical
components of the gas, and a gas in nonequilibrium solves those state
equations using time as an extra variable. This means that for a
nonequilibrium flow, something between 10 and 100 variables may be
required to describe the state of the gas at any given time. Additionally,
rarefied hypersonic flows (usually defined as those with a Knudsen number
above 0.1) do not follow the Navier-Stokes equations. Hypersonic
flows are typically categorized by their total energy, expressed as total
enthalpy (MJ/kg), total pressure (kPa-MPa), stagnation pressure (kPa-MPa),
stagnation temperature (K), or flow velocity (km/s). Wallace D. Hayes
developed a similarity parameter, similar to the Whitcomb area rule, which
allowed similar configurations to be compared.
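The first point can be checked directly with the standard θ-β-M relation for an oblique shock, which gives the flow deflection angle θ produced by a shock at angle β for a given Mach number M. The short sketch below is my own illustration, assuming a perfect gas with γ = 1.4 and an arbitrary 30-degree shock angle; the deflection angle changes by only about a degree between Mach 10 and Mach 40:

```python
import math

GAMMA = 1.4  # assumed ratio of specific heats for air

def deflection_angle_deg(mach, beta_deg):
    """Flow deflection angle (degrees) from the theta-beta-Mach relation
    for an oblique shock at shock angle beta."""
    beta = math.radians(beta_deg)
    num = mach**2 * math.sin(beta)**2 - 1.0
    den = mach**2 * (GAMMA + math.cos(2.0 * beta)) + 2.0
    return math.degrees(math.atan(2.0 / math.tan(beta) * num / den))

# For a fixed 30-degree shock angle, the deflection angle becomes nearly
# independent of Mach number once M is large -- the "Mach number
# independence" mentioned above.
for mach in (5, 10, 20, 40):
    print(f"M = {mach:>2}: deflection = {deflection_angle_deg(mach, 30.0):5.2f} deg")
```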
Regimes
Hypersonic flow can be approximately separated into a number of regimes. The
selection of these regimes is rough, due to the blurring of the boundaries
where a particular effect can be found.
- Perfect gas
In this regime, the gas can be regarded as an ideal gas. Flow in
this regime is still Mach number dependent. Simulations start to depend
on the use of a constant-temperature wall, rather than the adiabatic
wall typically used at lower speeds. The lower border of this region is
around Mach 5, where ramjets become inefficient, and the upper border
around Mach 10-12.
- Two-temperature ideal gas
This is a subset of the perfect gas regime, where the gas can be considered
chemically perfect, but the rotational and vibrational temperatures of the
gas must be considered separately, leading to two temperature models. See
particularly the modeling of supersonic nozzles, where vibrational
freezing becomes important.
- Dissociated
gas
In this regime, diatomic or polyatomic gases (the gases found in
most atmospheres) begin to dissociate as they come into contact with the
bow shock generated by the body. Surface catalysis plays a role in
the calculation of surface heating, meaning that the type of surface
material also has an effect on the flow. The lower border of this regime
is where any component of a gas mixture first begins to dissociate in
the stagnation point of a flow (which for nitrogen is around 2000 K). At
the upper border of this regime, the effects of ionization start to have
an effect on the flow.
- Ionized gas
In this regime the ionized electron population of the stagnated
flow becomes significant, and the electrons must be modeled
separately. Often the electron temperature is handled separately from
the temperature of the remaining gas components. This region occurs
for freestream flow velocities around 10–12 km/s. Gases in this region
are modeled as non-radiating plasmas.
- Radiation-dominated regime
Above around 12 km/s, the heat transfer to a vehicle changes from
being conductively dominated to radiatively dominated. The modeling of
gases in this regime is split into two classes:
- Optically thin: where the gas does not re-absorb radiation emitted from other parts of the gas.
- Optically thick: where the radiation must be considered a separate source of energy.
The modeling of optically thick gases is extremely difficult, since, due to the calculation of the radiation at each point, the computational load theoretically expands exponentially as the number of points considered increases.
Next time I will share information about the invention of hypersonic flight...
Friday, 13 November 2015
THE SQUARE
Jack Dorsey, the co-inventor of Twitter, is promoting his latest invention called the Square.
The Square is a small plug-in attachment for your mobile phone that allows you to receive credit card payments.
The idea originated from Dorsey's friend Jim McKelvey who was unable to sell some glass work to a customer because he couldn't accept a particular card being used.
Accepting credit card payments for something you're selling isn't always easy, especially if you are mobile like a tradesman, delivery service or a vendor at a trade show.
This latest invention uses a small scanner that plugs into the audio input jack on a mobile device.
It reads information on a credit card when it is swiped. The information is not stored on the device but is encrypted and sent over secure channels to banks.
It basically makes any mobile phone a cash register for accepting card payments.
As a payer, you receive a receipt via email that can be instantly accessed securely online. You can also use a text message to authorize payment in real time.
Retailers can create a payer account for their customers which accelerates the payment process.
For example, a cardholder can assign a photo to their card so their photo will appear on the phone for visual identity confirmation. Mobile devices with touch screens will also allow you to sign for goods.
There are no contracts, monthly fees, or hidden costs to accept card payments using Square and it is expected the plug-in attachment will also be free of charge.
A penny from every transaction will also be given to a cause of your choice.
As with Twitter, it's anticipated that Dorsey will direct the company based upon feedback from users.
Square Inc. has offices in San Francisco, Saint Louis and New York and is currently beta testing the invention with retailers in the United States.
Source: squareup.com
Wednesday, 11 November 2015
Invention Of Telescope
TELESCOPE
The first refracting telescope was invented by Hans Lippershey in 1608.
A telescope is an instrument that aids in the observation of remote objects by collecting electromagnetic radiation (such as visible light). The first known practical telescopes were invented in the Netherlands at the beginning of the 17th century, using glass lenses. They found use in terrestrial applications and astronomy. Within a few decades, the reflecting telescope was invented, which used mirrors. In the 20th century many new types of telescopes were invented, including radio telescopes in the 1930s and infrared telescopes in the 1960s. The word telescope now refers to a wide range of instruments detecting different regions of the electromagnetic spectrum, and in some cases other types of detectors. The word "telescope" (from the Ancient Greek τῆλε, tele "far" and σκοπεῖν, skopein "to look or see"; τηλεσκόπος, teleskopos "far-seeing") was coined in 1611 by the Greek mathematician Giovanni Demisiani for one of Galileo Galilei's instruments presented at a banquet at the Accademia dei Lincei. In the Starry Messenger, Galileo had used the term "perspicillum".
HISTORY
The earliest recorded working telescopes were the refracting telescopes that appeared in the Netherlands in 1608. Their development is credited to three individuals: Hans Lippershey and Zacharias Janssen, who were spectacle makers in Middelburg, and Jacob Metius of Alkmaar. Galileo heard about the Dutch telescope in June 1609, built his own within a month, and improved upon the design in the following year. In the same year, Galileo became the first person to point a telescope skyward in order to make telescopic observations of a celestial object. The idea that the objective, or light-gathering element, could be a mirror instead of a lens was being investigated soon after the invention of the refracting telescope. The potential advantages of using parabolic mirrors—reduction of spherical aberration and no chromatic aberration—led to many proposed designs and several attempts to build reflecting telescopes. In 1668, Isaac Newton built the first practical reflecting telescope, of a design which now bears his name, the Newtonian reflector. The invention of the achromatic lens in 1733 partially corrected color aberrations present in the simple lens and enabled the construction of shorter, more functional refracting telescopes. Reflecting telescopes, though not limited by the color problems seen in refractors, were hampered by the use of fast-tarnishing speculum metal mirrors employed during the 18th and early 19th century—a problem alleviated by the introduction of silver-coated glass mirrors in 1857, and aluminized mirrors in 1932.
The maximum physical size limit for refracting telescopes is about 1 meter (40 inches), dictating that the vast majority of large optical research telescopes built since the turn of the 20th century have been reflectors. The largest reflecting telescopes currently have objectives larger than 10 m (33 feet), and work is underway on several 30-40 m designs. The 20th century also saw the development of telescopes that worked in a wide range of wavelengths from radio to gamma-rays. The first purpose-built radio telescope went into operation in 1937. Since then, a tremendous variety of complex astronomical instruments have been developed.
TYPES
The name "telescope" covers a wide range of instruments. Most detect electromagnetic radiation, but there are major differences in how astronomers must go about collecting light (electromagnetic radiation) in different frequency bands.Telescopes may be classified by the wavelengths of light they detect:X-ray telescopes, using shorter wavelengths than ultraviolet light
Ultraviolet telescopes, using shorter wavelengths than visible light
Optical telescopes, using visible lightInfrared telescopes, using longer wavelengths than visible lightSubmillimetre telescopes, using longer wavelengths than infrared lighFresnel Imager, an optical lens technologyX-ray optics, optics for certain X-ray wavelengthsAs wavelengths become longer, it becomes easier to use antenna technology to interact with electromagnetic radiation (although it is possible to make very tiny antenna). The near-infrared can be handled much like visible light, however in the far-infrared and submillimetre range, telescopes can operate more like a radio telescope. For example, the James Clerk Maxwell Telescope observes from wavelengths from 3 μm (0.003 mm) to 2000 μm (2 mm), but uses a parabolic aluminum antenna.[11] On the other hand, the Spitzer Space Telescope, observing from about 3 μm (0.003 mm) to 180 μm (0.18 mm) uses a mirror (reflecting optics). Also using reflecting optics, the Hubble Space Telescope with Wide Field Camera 3 can observe from about 0.2 μm (0.0002 mm) to 1.7 μm (0.0017 mm) (from ultra-violet to infrared light).
OPTICAL TELESCOPE
There are three main optical types:
- The refracting telescope, which uses lenses to form an image.
- The reflecting telescope, which uses an arrangement of mirrors to form an image.
- The catadioptric telescope, which uses mirrors combined with lenses to form an image.
Beyond these basic optical types there are many sub-types of varying optical design, classified by the task they perform, such as astrographs, comet seekers, solar telescopes, etc.
At the photon energies of shorter wavelengths and higher frequencies, fully reflecting optics rather than glancing-incidence optics are used. Telescopes such as TRACE and SOHO use special mirrors to reflect extreme ultraviolet, producing higher resolution and brighter images than otherwise possible. A larger aperture does not just mean that more light is collected, it also enables a finer angular resolution. Telescopes may also be classified by location: ground telescope, space telescope, or flying telescope. They may also be classified by whether they are operated by professional astronomers or amateur astronomers. A vehicle or permanent campus containing one or more telescopes or other instruments is called an observatory.
RADIO TELESCOPES
Radio telescopes are directional radio antennas used for radio astronomy. The dishes are sometimes constructed of a conductive wire mesh whose openings are smaller than the wavelength being observed. Multi-element radio telescopes are constructed from pairs or larger groups of these dishes to synthesize large 'virtual' apertures that are similar in size to the separation between the telescopes; this process is known as aperture synthesis. As of 2005, the current record array size is many times the width of the Earth—utilizing space-based Very Long Baseline Interferometry (VLBI) telescopes such as the Japanese HALCA (Highly Advanced Laboratory for Communications and Astronomy) VSOP (VLBI Space Observatory Program) satellite. Aperture synthesis is now also being applied to optical telescopes using optical interferometers (arrays of optical telescopes) and aperture masking interferometry at single reflecting telescopes. Radio telescopes are also used to collect microwave radiation, which can be collected even when visible light is obstructed or faint, such as from quasars. Some radio telescopes are used by programs such as SETI and the Arecibo Observatory to search for extraterrestrial life.
X-RAY TELESCOPES
X-ray telescopes can use X-ray optics, such as Wolter telescopes composed of ring-shaped 'glancing' mirrors made of heavy metals that are able to reflect the rays by just a few degrees. The mirrors are usually a section of a rotated parabola and a hyperbola, or ellipse. In 1952, Hans Wolter outlined three ways a telescope could be built using only this kind of mirror. Examples of observatories using this type of telescope are the Einstein Observatory, ROSAT, and the Chandra X-Ray Observatory. By 2010, Wolter focusing X-ray telescopes had become possible up to 79 keV.
GAMMA-RAY TELESCOPES
Higher energy X-ray and gamma-ray telescopes refrain from focusing completely and use coded aperture masks: the patterns of the shadow the mask creates can be reconstructed to form an image. X-ray and gamma-ray telescopes are usually on Earth-orbiting satellites or high-flying balloons since the Earth's atmosphere is opaque to this part of the electromagnetic spectrum. However, high energy X-rays and gamma-rays do not form an image in the same way as telescopes at visible wavelengths. An example of this type of telescope is the Fermi Gamma-ray Space Telescope. The detection of very high energy gamma rays, with shorter wavelength and higher frequency than regular gamma rays, requires further specialization. An example of this type of observatory is VERITAS. Very high energy gamma-rays are still photons, like visible light, whereas cosmic rays include particles like electrons, protons, and heavier nuclei. A discovery in 2012 may allow focusing gamma-ray telescopes.[17] At photon energies greater than 700 keV, the index of refraction starts to increase again.
HIGH-ENERGY TELESCOPES
High-energy astronomy requires specialized telescopes to make observations since most of these particles go through most metals and glasses. In other types of high energy particle telescopes there is no image-forming optical system. Cosmic-ray telescopes usually consist of an array of different detector types spread out over a large area. A neutrino telescope consists of a large mass of water or ice, surrounded by an array of sensitive light detectors known as photomultiplier tubes. Energetic neutral atom observatories like Interstellar Boundary Explorer detect particles traveling at certain energies.
Tuesday, 10 November 2015
Some Heat Related Terminology
THERMODYNAMICS
- The branch of physical science that deals with the relations between heat and other forms of energy (such as mechanical, electrical, or chemical energy), and, by extension, of the relationships between all forms of energy.
- Heat is energy that can be converted from one form to another, or transferred from one object to another. For example, a stove burner converts electrical energy to heat and conducts that energy through the pot to the water. This increases the kinetic energy of the water molecules, causing them to move faster and faster. At a certain temperature (the boiling point), the molecules have gained enough energy to break free of the bonds of the liquid and escape as vapor.
Specific heat
The amount of heat required to increase the temperature of a certain mass of a substance by a certain amount is called specific heat, or specific heat capacity, according to Wolfram Research. The conventional unit for this is calories per gram per kelvin. The calorie is defined as the amount of heat energy required to raise the temperature of 1 gram of water at 4 °C by 1 degree. The specific heat of a metal depends almost entirely on the number of atoms in the sample, not its mass. For instance, a kilogram of aluminum can absorb about seven times more heat than a kilogram of lead. However, lead atoms can absorb only about 8 percent more heat than an equal number of aluminum atoms. A given mass of water, however, can absorb nearly five times as much heat as an equal mass of aluminum. The specific heat of a gas is more complex and depends on whether it is measured at constant pressure or constant volume.
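As a quick worked example of that definition, here is a minimal sketch of my own; it assumes the standard conversion of 4.184 joules per calorie:

```python
# Heat needed to warm a mass of water by a given amount, using the
# definition of the calorie quoted above (1 cal raises 1 g of water by 1 K).
SPECIFIC_HEAT_WATER = 1.0      # cal per gram per kelvin
JOULES_PER_CALORIE = 4.184     # standard conversion factor

def heat_required(mass_g, delta_t_k, specific_heat=SPECIFIC_HEAT_WATER):
    """Return the heat in joules needed to raise mass_g grams by delta_t_k kelvin."""
    return mass_g * specific_heat * delta_t_k * JOULES_PER_CALORIE

# Warming 250 g of water from 20 C to 100 C for a cup of tea:
print(f"{heat_required(250, 80):.0f} J")   # roughly 83,700 J
```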
Thermal conductivity
Thermal conductivity (k) is “the rate at which heat passes through a specified material, expressed as the amount of heat that flows per unit time through a unit area with a temperature gradient of one degree per unit distance,” according to the Oxford Dictionary. The unit for k is watts (W) per meter (m) per kelvin (K). Values of k for metals such as copper and silver are relatively high at 401 and 428 W/m·K, respectively. This property makes these materials useful for automobile radiators and cooling fins for computer chips because they can carry away heat quickly and exchange it with the environment. The highest value of k for any natural substance is diamond at 2,200 W/m·K. Other materials are useful because they are extremely poor conductors of heat; this property is referred to as thermal resistance, or R-value, which describes how strongly the material resists the flow of heat through it. These materials, such as rock wool, goose down and Styrofoam, are used for insulation in exterior building walls, winter coats and thermal coffee mugs. R-value is given in units of square feet times degrees Fahrenheit times hours per British thermal unit (ft²·°F·h/Btu) for a 1-inch-thick slab.
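That definition of k plugs straight into Fourier's law of conduction for a flat slab, q = k·A·ΔT/d. Here is a minimal sketch of my own, using the copper value quoted above and made-up slab dimensions:

```python
# Steady-state heat flow through a flat slab (Fourier's law: q = k * A * dT / d).
def heat_flow_watts(k, area_m2, delta_t_k, thickness_m):
    """Heat flow in watts through a slab of conductivity k (W/m.K)."""
    return k * area_m2 * delta_t_k / thickness_m

# A 1 cm-thick copper plate (k = 401 W/m.K from the text), 0.01 m^2 in area,
# with a 50 K temperature difference across it:
print(f"{heat_flow_watts(401, 0.01, 50, 0.01):.0f} W")   # about 20,000 W
```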
Newton's Law of Cooling
In 1701, Newton first stated his Law of Cooling in a short article titled "Scala graduum Caloris" ("A Scale of the Degrees of Heat") in the Philosophical Transactions of the Royal Society. Newton's statement of the law translates from the original Latin as, "the excess of the degrees of the heat ... were in geometrical progression when the times are in an arithmetical progression." Worcester Polytechnic Institute gives a more modern version of the law as "the rate of change of temperature is proportional to the difference between the temperature of the object and that of the surrounding environment."
This results in an exponential decay in the temperature difference. For example, if a warm object is placed in a cold bath, within a certain length of time, the difference in their temperatures will decrease by half. Then in that same length of time, the remaining difference will again decrease by half. This repeated halving of the temperature difference will continue at equal time intervals until it becomes too small to measure.
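Written as an equation, the modern statement is dT/dt = -k(T - T_env), whose solution is an exponential decay of the temperature difference. The small sketch below is my own illustration with an arbitrary cooling constant; it shows the repeated halving described above:

```python
import math

def temperature(t, t_env, t_initial, k):
    """Newton's law of cooling: the temperature difference decays exponentially."""
    return t_env + (t_initial - t_env) * math.exp(-k * t)

T_ENV, T_INITIAL, K = 20.0, 80.0, 0.05   # degrees C and an arbitrary per-minute rate
half_life = math.log(2) / K              # time for the difference to halve

for n in range(4):
    t = n * half_life
    print(f"t = {t:5.1f} min: T = {temperature(t, T_ENV, T_INITIAL, K):5.1f} C")
# The 60-degree difference halves roughly every 13.9 minutes: 80 -> 50 -> 35 -> 27.5 C
```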
INNOVATIVE Mind Of A Washerwoman
FROTH FLOTATION METHOD
One can do wonders if he or she has a scientific temperament and is
attentive to observations. A washerwoman had an INNOVATIVE mind too.
While washing a miner's overalls, she noticed that sand and similar
dirt fell to the bottom of the washtub. What was peculiar was that the
copper-bearing compounds that had come onto the clothes from the mines
were caught in the soapsuds and so rose to the top. One of her clients
was a chemist, Mrs. Carrie Everson. The washerwoman described her
observation to Mrs. Everson, who thought that the idea could be used
for separating copper compounds from rocky and earthy material on a
large scale. In this way an invention was born. At that time only ores
that contained a large amount of the metal were used for the extraction
of copper. The invention of the FROTH FLOTATION method made copper
mining profitable even from low-grade ores. World production of
copper soared and the metal became cheap.
The mining field wouldn’t be the same without this innovation, considered one of the greatest technologies applied to the industry in the twentieth century. Its consequent development boosted the recovery of valuable minerals like copper, for instance. Our world, full of copper wires used for electrical conduction and electrical motors, wouldn’t be the same without this innovative process.
During the froth flotation process, several types of sulfides, carbonates and oxides are separated prior to further refinement. Phosphates and coal can also be purified by flotation technology.
During the process, four things happen:
- Reagent conditioning happens in order to achieve hydrophobic surface charges on the desired particles
- Collection and upward transport by bubbles in contact with air or nitrogen
- A stable froth forms on the surface of the flotation cell
- There’s a separation of the mineral laden froth from the bath
The flotation process has three stages:
- Roughing
- Cleaning
- Scavenging
Flotation can be performed by different types of machines, in rectangular or cylindrical mechanically agitated cells or tanks, columns, a Jameson Cell or deinking flotation machines. The mechanical cells are based on a large mixer and diffuser mechanism that can be found at the bottom of the mixing tank and introduces air, providing a mixing action. The flotation columns use air spargers to introduce air at the bottom of a tall column, while introducing slurry above and generating a mixing action as well. Mechanical cells usually have a higher throughput rate, but end up producing lower quality material, while flotation columns work the other way around, with a lower throughput rate but higher quality material. The Jameson cell just combines the slurry with air in a downcomer: there, a high shear creates the turbulent conditions required for bubble-particle contacting.
The process of froth flotation usually involves a series of steps:
- the preparation of appropriate particle sizes of liberated components in the mixture of solids to be separated;
- the creation of conditions favorable for the adherence of one or more components in the mixture of solids to attach to air bubbles; and
- the formation of a stable froth containing one or more components existing on the surface of the agitated mixture of particles (the pulp) which can be removed (recovered).
[keep visiting for more curious facts]
Saturday, 10 October 2015
Frying Pan
A Frying Pan That Teaches You to Cook
One of ten brilliant innovations from our 2015 Invention Awards
Inventors: Humberto Evans, Mike Robbins, Kyle Moss, Yuan Wei
Company: CircuitLab Inc.
Invention: Pantelligent
Development Cost To Date: $20,000+
Maturity: 5/5
Instead of eating at dining halls in college,
Humberto Evans cooked his own meals. His best friend, Mike Robbins, on
the other hand, could barely fry an egg. Robbins would forego the lure
of takeout only when Evans gave him step-by-step cooking instructions.
The two realized that others probably needed some culinary hand-holding
as well. With help from two other MIT engineering alumni, Kyle Moss and
Yuan Wei, they created the world’s first smart frying pan: Pantelligent.
The pan measures its temperature with heat
sensors and transmits the data via Bluetooth technology in its handle. A
smartphone app uses this information to decide when it’s time for a
recipe’s next step and then tells the user. “To cook amazing food the
way chefs do, you have to build intuition for how long to cook something
at the right temperature,” Evans says. “We take all that knowledge and
package it into our app.”
Users can choose a preprogrammed recipe, such as
chicken adobo or fried eggs, or select freestyle mode to get temperature
readings but not instructions. If a person likes the meal made in this
mode, he or she can record and share the recipe. With a tool that
de-stresses the kitchen experience, the Pantelligent team hopes more
people will skip unhealthy processed meals in favor of home-cooked ones. by Junnie Kwon.
Keep visiting.....Good Morning.....
Nanotechnology: Making Everything Smaller
A basic definition: Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced.
In its original sense, 'nanotechnology' refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high performance products.
If we rearrange the atoms in dirt, water and air we can make potatoes.
When K. Eric Drexler popularized the word 'nanotechnology' in the 1980s, he was
talking about building machines on the scale of molecules, a few nanometers
wide—motors, robot arms, and even whole computers, far smaller than a
cell. Drexler spent the next ten years describing and analyzing these
incredible devices, and responding to accusations of science
fiction. Meanwhile, mundane technology was developing the ability to
build simple structures on a molecular scale. As nanotechnology became
an accepted concept, the meaning of the word shifted to encompass the
simpler kinds of nanometer-scale technology. The U.S. National Nanotechnology Initiative was created to fund this kind of nanotech: their definition includes
anything smaller than 100 nanometers with novel properties.
I want to build a billion tiny factories, models of each other, which are manufacturing simultaneously. . . The
principles of physics, as far as I can see, do not speak against the
possibility of maneuvering things atom by atom. It is not an attempt to
violate any laws; it is something, in principle, that can be done; but
in practice, it has not been done because we are too big. — Richard Feynman, Nobel Prize winner in physics.
"Nanotechnology" has become something
of a buzzword and is applied to many products and technologies that are often
largely unrelated to molecular nanotechnology. While these broader usages
encompass many valuable evolutionary improvements of existing technology, molecular
nanotechnology will open up qualitatively new and exponentially expanding opportunities
on a historically unprecedented scale. We will use the word "nanotechnology"
to mean "molecular nanotechnology".
Continue reading my posts; I will collect more information about nanotechnology that will amaze you...
Friday, 9 October 2015
Einstein's Equation that Gave Birth to the Atom Bomb
In relativity, all of the energy that moves along with an object (that is, all the energy which is present in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. Each potential and kinetic energy makes a proportional contribution to the mass. As noted above, even if a box of ideal mirrors "contains" light, then the individually massless photons still contribute to the total mass of the box, by the amount of their energy divided by c2.
In relativity, removing energy is removing mass, and for an observer in the center of mass frame, the formula m = E/c2 indicates how much mass is lost when energy is removed. In a nuclear reaction, the mass of the atoms that come out is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light which has the same relativistic mass as the difference (and also the same invariant mass in the center of mass frame of the system). In this case, the E in the formula is the energy released and removed, and the mass m is how much the mass decreases. In the same way, when any sort of energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c2. For example, when water is heated it gains about 1.11×10−17 kg of mass for every joule of heat added to the water.
An object moves with different speed in different frames, depending on the motion of the observer, so the kinetic energy in both Newtonian mechanics and relativity is frame dependent. This means that the amount of relativistic energy, and therefore the amount of relativistic mass, that an object is measured to have depends on the observer. The rest mass is defined as the mass that an object has when it is not moving (or when an inertial frame is chosen such that it is not moving). The term also applies to the invariant mass of systems when the system as a whole is not "moving" (has no net momentum). The rest and invariant masses are the smallest possible value of the mass of the object or system. They also are conserved quantities, so long as the system is isolated. Because of the way they are calculated, the effects of moving observers are subtracted, so these quantities do not change with the motion of the observer.
The rest mass is almost never additive: the rest mass of an object is not the sum of the rest masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as measured by an observer that sees the center of the mass of the object to be standing still. The rest mass adds up only if the parts are standing still and do not attract or repel, so that they do not have any extra kinetic or potential energy. The other possibility is that they have a positive kinetic energy and a negative potential energy that exactly cancels.
This is the most famous equation in the history of equations. It has been printed on countless T-shirts and posters, starred in films and, even if you've never appreciated the beauty or utility of equations, you'll know this one. And you probably also know who came up with it – physicist and Nobel laureate Albert Einstein.
The ideas that led to the equation were set down by Einstein in 1905, in a paper submitted to the Annalen der Physik called "Does the Inertia of a Body Depend Upon Its Energy Content?". The relationship between energy and mass came out of another of Einstein's ideas, special relativity, which was a radical new way to relate the motions of objects in the universe.
At one level, the equation is devastatingly simple. It says that the energy (E) in a system (an atom, a person, the solar system) is equal to its total mass (m) multiplied by the square of the speed of light (c, equal to 186,000 miles per second). Like all good equations, though, its simplicity is a rabbit-hole into something profound about nature: energy and mass are not just mathematically related, they are different ways to measure the same thing. Before Einstein, scientists defined energy as the stuff that allows objects and fields to interact or move in some way – kinetic energy is associated with movement, thermal energy involves heating and electromagnetic fields contain energy that is transmitted as waves. All these types of energy can be transformed from one to another, but nothing can ever be created or destroyed.
In relativity theory, Einstein introduced mass as a new type of energy to the mix. Beforehand, the mass of something in kilograms was just a measure of how much stuff was present and how resistant it was to being moved around. In Einstein's new world, mass became a way to measure the total energy present in an object, even when it was not being heated, moved or irradiated or whatever else. Mass is just a super-concentrated form of energy and, moreover, these things can turn from one form to the other and back again. Nuclear power stations exploit this idea inside their reactors where subatomic particles, called neutrons, are fired at the nuclei of uranium atoms, which causes the uranium to split into smaller atoms. The process of fission releases energy and further neutrons that can go on to split more uranium atoms. If you made very precise measurements of all the particles before and after the process, you would find that the total mass of the latter was very slightly smaller than the former, a difference known as the "mass defect". That missing matter has been converted to energy and you can calculate how much using Einstein's equation.
Despite the tiny discrepancy in mass between the uranium atom and its products, the amount of energy released is big and the reason why is obvious when you look at the c² term in the equation – the speed of light is a huge number by itself and its square is therefore enormous. There is a lot of energy condensed into matter — 1 kg of "stuff" contains around 9 x 10^16 joules, if you could somehow transform all of it into energy. That is the equivalent of around 21 megatons of TNT. More practically, it is the amount of energy that would come out of a 1 gigawatt power plant, big enough to run 10 million homes for at least three years. A 100 kg person, therefore, has enough energy locked up inside them to run that many homes for 300 years.
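Those back-of-the-envelope figures are easy to reproduce. The sketch below is my own; the TNT equivalence (4.184 x 10^15 joules per megaton) and the one-gigawatt plant are the standard rough yardsticks:

```python
# Rest-mass energy of 1 kg of matter, and some rough comparisons.
C = 2.998e8                      # speed of light, m/s
JOULES_PER_MEGATON = 4.184e15    # standard TNT equivalence
GIGAWATT = 1.0e9                 # watts
SECONDS_PER_YEAR = 3.156e7

mass_kg = 1.0
energy_j = mass_kg * C**2
print(f"E = {energy_j:.2e} J")                                       # about 9.0e16 J
print(f"  = {energy_j / JOULES_PER_MEGATON:.1f} megatons of TNT")    # about 21.5 Mt
print(f"  = {energy_j / (GIGAWATT * SECONDS_PER_YEAR):.1f} years of a 1 GW plant")
```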
Unlocking that energy is no easy task, however. Nuclear fission is one of several ways to release a tiny bit of an atom's mass, but most of the stuff remains in the form of familiar protons, neutrons and electrons. One way to turn an entire block of material into pure energy would be to bring it together with antimatter. Particles of matter and antimatter are the same, except for an opposite electrical charge. Bring them together, though, and they will annihilate each other into pure energy. Unfortunately, given that we don't know any natural sources of antimatter, the only way to produce it is in particle accelerators and it would take 10 million years to produce a kilogram of it.
Particle accelerators studying fundamental physics are another place where Einstein's equation becomes useful. Special relativity says that the faster something moves, the more massive it becomes. In a particle accelerator, protons are accelerated to almost the speed of light and smashed into each other. The high energy of these collisions allows the formation of new, more massive particles than protons – such as the Higgs boson – that physicists might want to study. Which particles might be formed and how much mass they have can all be calculated using Einstein's equation.
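Rearranged as m = E/c², the same equation tells you how much collision energy is needed to create a particle of a given mass. Here is a small sketch of my own, using the roughly 125 GeV/c² mass usually quoted for the Higgs boson:

```python
# Mass equivalent of the collision energy needed to create a particle,
# using m = E / c^2. Example: a Higgs boson of roughly 125 GeV/c^2.
C = 2.998e8                  # speed of light, m/s
JOULES_PER_EV = 1.602e-19    # electron-volt to joule conversion

higgs_energy_j = 125e9 * JOULES_PER_EV       # ~125 GeV expressed in joules
higgs_mass_kg = higgs_energy_j / C**2

proton_mass_kg = 1.673e-27
print(f"Higgs mass ~ {higgs_mass_kg:.2e} kg "
      f"(~{higgs_mass_kg / proton_mass_kg:.0f} proton masses)")
```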
It would be nice to think that Einstein's equation became famous simply because of its fundamental importance in making us understand how different the world really is to how we perceived it a century ago. But its fame is mostly because of its association with one of the most devastating weapons produced by humans – the atomic bomb. The equation appeared in the report, prepared for the US government by physicist Henry DeWolf Smyth in 1945, on the Allied efforts to make an atomic bomb during the Manhattan project. The result of that project led to the death of hundreds of thousands of Japanese citizens in Hiroshima and Nagasaki.
Einstein himself had encouraged the US government to fund research into atomic energy during the second world war but his own involvement in the Manhattan project was limited because of his lack of security clearances. It is unlikely that Einstein's equation was much use in designing the bomb, beyond making scientists and military leaders realise that such a thing would be theoretically possible, but the association has stuck.