
  Does a dehumidifier cool a room?
Posted by: tong12pp05 - Yesterday, 05:50 AM - Forum: Ark - No replies


    A new humid-air dehumidifier was designed and built. It is composed of two identical stages, with condensation achieved on vertical tubes, and several configurations were tested. A thermodynamic model was established to predict the performance of each stage. The results showed that the flow rate of pure water produced by dehumidification, and the heat power exchanged, increase with increasing mass flow rates of dry air and cooling water and with the temperature and absolute humidity of the moist air entering the dehumidifier. A high quantity of water vapour in the air forms a thick condensate film on the cooling tubes, which reduces the condensate flow rate and the thermal power exchanged. The use of a multistage condenser as a dehumidifier is therefore necessary, but not by itself sufficient, for optimum efficiency.


    Dehumidifiers remove excess moisture from a room, but does a dehumidifier cool a room? We explore the answer here.


    Dehumidifiers are an increasingly popular choice, but does a dehumidifier cool a room? Not only can reducing humidity make your room more pleasant, but it can also help protect the people in your home from a host of health issues, including respiratory problems and allergies. Dehumidifiers can also stop the growth of mold, which can be damaging and dangerous to health.


    Understanding how dehumidifiers work and what they do can help us answer the question: does a dehumidifier cool a room? While dehumidifiers aren’t designed to reduce a room’s temperature, removing humidity can make it feel cooler and more comfortable. They can even have some surprising benefits, with dehumidifiers helping with snoring and other issues. 

    Dehumidifiers remove excess moisture from the air by drawing the air in and cooling it. A refrigerant (or compressor) dehumidifier draws in air, which then passes through a cold coil that causes the moisture within the air to condense into water that drops into a reservoir at the bottom of the machine. 


    Alternatively, a desiccant dehumidifier uses an absorbent material, such as silica gel or zeolite, typically formed into a rotor. Air is pushed through the rotor, where the desiccant material removes moisture from the air. While they remove humidity, neither type of dehumidifier will noticeably affect the temperature of a room. However, removing humidity may make the room feel cooler. 


    Dehumidifiers aren’t the same as air conditioning units, which are designed to generate cold air that’s pumped into a room in order to lower the temperature. In contrast, dehumidifiers remove the moisture from the existing air within a space.


    However, while not specifically designed to cool a room, dehumidifiers still do an essential job within the home. A 2018 study from the International Journal of Hygiene and Environmental Health found that humid air is associated with a whole range of health issues. No matter what causes dampness in a house, excessively high humidity can lead to the growth of damp and mold, which can be dangerous to health, according to the Centers for Disease Control and Prevention. 


    Meanwhile, if the humidity level in a room is too low, researchers from Environmental Health Perspectives have established that it can cause a range of health problems, including skin irritation. Scientists from Biological Rhythm Research have also confirmed what most of us already know: high humidity can have a "deleterious effect on sleep". High humidity can make a room feel uncomfortable, according to research from Temperature: Medical Physiology and Beyond, because the sweat our bodies produce to regulate heat doesn’t evaporate as quickly. By removing this humidity, the room can feel cooler and more comfortable. As a result, it's easier to breathe and sleep. 

    Read more: Is condensation on windows bad?

    So, do dehumidifiers cool a room? Sort of. High humidity causes heavy, muggy air that can feel clammy and uncomfortable. Stabilizing the relative humidity at an appropriate level (between 30% and 50%) can help a room feel drier, cooler, and more comfortable, according to research from Building and Environment. Dehumidifiers are designed to remove humid air from a hot room, which can have a cooling effect. 


    While a dehumidifier can make a room feel less warm, it's not its primary purpose. If your home is uncomfortably warm, we would recommend investing in an air-conditioning unit instead.

    Just as dehumidifiers don’t technically cool a room, but can have a cooling effect thanks to the reduction in humidity, the same concept applies to an entire house. The key is to invest in a unit that has the capacity to cover a large area. The capacity of your dehumidifier is measured in pints per 24 hours and the larger the dehumidifier, the more water it will capture from the atmosphere. 


    When it comes to choosing a suitable dehumidifier, there are two key aspects to consider: the size of the space, and how damp or humid the area is. 


    The first step in finding a dehumidifier large enough to cool a house is to measure your home and calculate its total square footage. Once you've got an approximate measurement, you can use this handy calculator provided by Energy Star to help you select an appropriately sized dehumidifier. Approved by the U.S. Environmental Protection Agency and the U.S. Department of Energy, this calculator will help you estimate how humid and damp your home is so that you can find a suitably sized dehumidifier.


    Using the calculator, we can see that a 30-pint dehumidifier is suitable for an average-sized three-bedroom family home (2,500 square feet). Meanwhile, a 10-12-pint dehumidifier will be enough to dehumidify a single room, and a 20-pint dehumidifier is powerful enough to dehumidify a typical one-bedroom flat (1,500 square feet).
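    The sizing guidance above can be sketched as a simple lookup. This is a rough illustration only: the thresholds are interpolated from the examples just quoted, and `recommended_capacity_pints` is a hypothetical helper, not the Energy Star calculator, which also accounts for how damp the space is.

```python
def recommended_capacity_pints(square_feet: float) -> int:
    """Rough dehumidifier capacity (pints per 24 h) by floor area.

    Thresholds interpolated from the article's examples:
    10-12 pints for a single room, 20 pints for ~1,500 sq ft,
    30 pints for ~2,500 sq ft. Illustrative only.
    """
    if square_feet <= 500:
        return 12
    if square_feet <= 1500:
        return 20
    if square_feet <= 2500:
        return 30
    # Larger homes: scale roughly linearly beyond the cited examples.
    return 30 + 10 * int((square_feet - 2500) // 1000 + 1)

print(recommended_capacity_pints(1500))  # 20
print(recommended_capacity_pints(2500))  # 30
```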


    Like any other electrical appliance, the more you use a dehumidifier, the more money it will cost you. Both refrigerant and desiccant dehumidifiers aren't hugely energy efficient, say researchers from Earth and Environmental Science. They conclude that "existing traditional dehumidification technology has high energy consumption and poor reliability".


    The safest and most comfortable humidity level is around 50%, a study from Environmental Health Perspectives found. Meanwhile, the United States Environmental Protection Agency recommends a humidity level of between 30% and 50%. Choosing the best humidity level for you will depend on what's most comfortable for you and those you share your home with, but be aware that lower humidity settings (and subsequently heavier use of your dehumidifier) will increase energy usage. 
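    The recommended band can be expressed as a simple check; the 30% and 50% thresholds come from the EPA guidance quoted above, while the function name and labels are our own.

```python
def humidity_status(rh_percent: float) -> str:
    """Classify indoor relative humidity against the EPA's
    recommended 30-50% band (thresholds from the article)."""
    if rh_percent < 30:
        return "too dry"       # risk of skin irritation
    if rh_percent <= 50:
        return "comfortable"
    return "too humid"         # risk of mold, poor sleep

print(humidity_status(45))  # comfortable
```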


    When setting the humidity level for your room or home, be aware of the impact on your bills. The lower the humidity level setting and the warmer the room, the more work you're asking the dehumidifier to do – and the more energy it will use. 


    In the world of dehumidifiers, size matters. The larger and more efficient your dehumidifier is, the more significant an impact it will have in your home. Removing humidity from the air means your air conditioning unit runs more efficiently, scientists from the Federal University of Technology, Akure have found. As a result, an air conditioning unit shouldn’t have to work as hard to make a room feel more comfortable, which should help to reduce energy costs. 


    To ensure that a dehumidifier is working as efficiently as possible, you should close all doors and windows and situate the dehumidifier in the center of a room with a reasonable distance surrounding the machine. It’s also important to regularly empty the reservoir and clean the filter, according to the manufacturer's instructions. Learn how to clean a dehumidifier in our guide.


    The longer you run your dehumidifier at a time, the more expensive it will be. Thankfully, many dehumidifiers have the option to operate until they reach the pre-set humidity level and then shut off. It's easier and more efficient for a machine to maintain a humidity level rather than work hard to lower a high humidity level. If you're expecting the day to become humid, we’d recommend starting the machine early in the day to save energy in the long run. 


    Household air conditioners (ACs) with cooling capacities of less than 6 kW are popular in the tropics but are highly energy intensive because of the humid ambient condition. The hybrid dehumidifier–AC concept was studied here to improve the energy efficiency of such systems. Feasibility of the concept was investigated by looking at the design complexity, expected performance, and economic aspects. From the design analysis, it was concluded that small desiccant wheels are most practical for portable and window ACs, while no dehumidification concept was found ideal for split-type room ACs. Performance of the system was benchmarked with data from a 6.4 m2 room under tropical ambient conditions with air change rates of 1–4 changes per hour. It was found that an average load reduction of up to 8.1% was obtainable. The corresponding potential power saving was 9.4%. The performance data were then used for the economic analysis. It was found that the hybrid system is financially attractive mostly when cooling capacity, usage rate, and electricity price are high. Furthermore, the system should have a cooling capacity of at least 4 kW and 4 air changes per hour to be financially justifiable, particularly in places with low electricity prices.
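    The economic trade-off described above can be sketched as a back-of-envelope calculation. The 9.4% potential power saving is the study's average figure; the COP, running hours and electricity price are illustrative assumptions, and `annual_saving_usd` is a hypothetical helper, not the study's model.

```python
def annual_saving_usd(capacity_kw, hours_per_year, price_per_kwh,
                      cop=3.0, saving_fraction=0.094):
    """Rough yearly electricity saving from a hybrid dehumidifier-AC.

    Electrical draw is approximated as cooling capacity / COP.
    saving_fraction=0.094 is the study's average potential power
    saving; COP, hours and price are illustrative assumptions.
    """
    electrical_kw = capacity_kw / cop
    baseline_kwh = electrical_kw * hours_per_year
    return baseline_kwh * price_per_kwh * saving_fraction

# 6 kW unit, 2,000 h/year, $0.20/kWh
print(round(annual_saving_usd(6.0, 2000, 0.20), 2))  # 75.2
```

    As the study notes, the saving scales with capacity, usage and price, which is why the concept pays off mainly for heavily used, high-capacity units in expensive-electricity markets.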


  Latex Medical Gloves: Time for a Reappraisal
Posted by: tong12pp05 - Yesterday, 05:48 AM - Forum: Ark - No replies


    Many hospitals have implemented policies to restrict or ban the use of devices made of natural rubber latex (NRL) in healthcare as precautionary measures against the perceived risk of NRL allergy. Changes in glove technology, progress in measuring the specific allergenic potential of gloves and a dramatic decrease in the prevalence of NRL allergies after interventions and education prompted us to revisit the basis for justifiable glove selection policies. The published Anglophone literature from 1990 to 2010 was reviewed for original articles and reviews dealing with the barrier and performance properties of NRL and synthetic gloves and the role of glove powder. The review shows that NRL medical gloves, when compared with synthetic gloves, tend to be stronger, more flexible and better accepted by clinicians. The introduction of powder-free gloves has been associated with reductions in protein content and associated allergies. Recently, new methods to quantify clinically relevant NRL allergens have enabled the identification of gloves with low allergenic potential. The use of low-protein, low-allergenic, powder-free gloves is associated with a significant decrease in the prevalence of type I allergic reactions to NRL among healthcare workers. Given the excellent barrier properties and operating characteristics, dramatically reduced incidences of allergic reactions, availability of specific tests for selection of low-allergen gloves, competitive costs and low environmental impact, the use of NRL gloves within the hospital environment warrants reappraisal.


    In recent years, many hospitals and health care settings around the world have decided to restrict the use or totally ban all natural rubber latex (NRL) devices as precautionary measures against NRL allergy threats. As is widely acknowledged, type I or IgE-mediated NRL allergy has, for several years, been one of the most significant occupational health problems [for reviews, see [1,2,3]]. However, it is now also acknowledged that new cases of NRL allergy have decreased significantly, and sometimes virtually disappeared, in countries and hospital regions where health authorities have required the use of low-allergen/low-protein, non-powdered protective medical gloves. Thus, policies which ban the use of NRL devices may be an overreaction that can lead to unexpected compromises in the primary purpose of using protective gloves, that is, providing a competent barrier to protect against infections for both healthcare professionals and patients [4,5]. These controversies prompted us to revisit the basis for justifiable glove selection policies.


    As is well known, NRL has been used as a material for the production of gloves for almost a century. Throughout the 1990s there were increasing concerns about transmittable diseases, particularly HIV infection and hepatitis, which resulted in a dramatic increase in the use of NRL gloves. Escalating glove use in the 1990s was associated with the rise in reports of allergic reactions to NRL gloves among healthcare workers [1,6,7,8,9,10]. The increased demand for gloves led to an upsurge in glove production, especially in Malaysia. Between 1987 and 1989 the Malaysian Rubber Development Board received over 400 applications to form glove companies where previously only 25 existed [11]. Early on in the history of NRL allergy, some authors [12,13] suggested that the increased production in response to the sudden upsurge for latex gloves often led to inadequate leaching to reduce protein levels.



    The healthcare community requires medical gloves, both for examination and surgery, in order to provide a barrier that prevents transmission of micro-organisms to and from patients [4]. Many factors are involved in the choice of materials for the production of medical gloves, which relate to both the protective effect as well as ease and comfort of use [14,15]. For a large number of healthcare practitioners, NRL continues to be the glove material of choice [15,16].



    The negative aspect of NRL glove use, linked to the allergy problems, has gained substantial media coverage, in addition to the publication of a significant number of scientific papers. In reaction to the media and scientific coverage, and to rising compensation claims, many hospitals around the world have implemented new latex allergy and glove policies, resulting in the substitution of NRL gloves with synthetic gloves in certain areas, on specific patients or by sensitized staff. More recently, a number of high profile hospitals, exemplified by Johns Hopkins Hospital in Baltimore, Md., USA, and the Cleveland Clinic’s network of nine hospitals in Cleveland, Ohio, USA, have gone ‘latex free’ [5]. As a result, a small but increasing number of medical practitioners only have access to gloves made from synthetic materials. Such policies require full consideration of all of the factors involved, including glove functionality as well as costs incurred both directly and indirectly, including environmental impact.


    Following recognition of the problem of NRL sensitivity in the late 1980s and early 1990s, many changes were made in the production processes for NRL gloves and in the implementation of latex-sensitivity protocols in hospitals. In recent years, these changes have resulted in a significant reduction in the prevalence rates of allergic reactions to NRL. Experience from the Mayo Clinic, Rochester, Minn., USA [17] and Finland [18] showed that the change by an institute or hospital district specifically to low-allergen gloves or to gloves with undetectable allergen contents, led to a decrease in the incidence of new cases of occupational allergy. In Germany, Allmers et al. [19] showed that a combination of educating physicians and administrators, together with regulations requiring that healthcare facilities only purchase low-protein, powder-free NRL gloves, can even lead to prevention of sensitization.


    This review compares the key properties of gloves made of NRL and synthetic source materials and examines glove barrier and functional characteristics, recent changes in glove technology, developments in NRL allergen measurement methodology, and the preferences of clinicians and other healthcare workers. The Anglophone literature, limited largely to the period from 1990 to 2010, was surveyed for original research reports and review articles, addressing specifically the evidence for the consequent reductions in the risk of allergic reactions and changes in the epidemiology of NRL allergies.


    Glove Source Materials

    Many plants produce liquid latex, but the natural material, NRL, used in rubber manufacture is almost exclusively obtained from the Hevea brasiliensis tree. It contains the rubber polymer, cis-poly-isoprene, as well as varying amounts of a large number of different proteins [20,21,22]. Various chemicals, such as accelerators, activators, anti-oxidants and vulcanizing agents, are used in the manufacture of medical gloves [[23]; for review, see [24]] but a large proportion of these chemicals are then leached out in the further stages of production, through processes such as ‘wet-gel leaching’. These leaching processes also remove the majority of the water-soluble proteins found in NRL [24].


    The raw materials for synthetic glove manufacture include vinyl (polyvinyl chloride), nitrile (acetonitrile butadiene), neoprene, polyisoprene, polychloroprene, polyurethane and polyethylene, which are generally derived from oil chemistry. Nitrile is very similar in its polymer chemical structure to NRL and, in this respect, may be considered as synthetic latex.



    Deproteinised latex, composed of enzyme-treated NRL, has also been used as a source material for gloves. We are not aware of published reports in which gloves made of deproteinised NRL have been compared with conventional NRL gloves, especially with respect to their allergenic properties, although there are reports that NRL-allergic patients can tolerate condoms made from this material [26].


    Recently, liquid latex from a North American and Mexican desert shrub, Parthenium argentatum, commonly known as Guayule, has been introduced as source material for gloves [27]. The obvious advantage of Guayule is that it is not botanically related to H. brasiliensis and, for the time being, no reports about type I allergies to these gloves have been reported.


    Glove Properties


    Barrier Properties


    The primary function of gloves is to provide a competent barrier to protect against infections for both healthcare professionals and the patients. Gloves used by healthcare workers need to be single use for each patient contact and treatment, although it is recommended that prolonged and indiscriminate use should be avoided to minimize the risk of sensitization [4]. They are required in various situations such as invasive procedures and contact with non-intact skin, mucous membranes or sterile sites. As such, leakage must be minimal, even when apparently undamaged, and various standards have been developed in order that all gloves perform adequately regardless of material [4]. They should be easy to put on, comfortable to wear and provide adequate, durable protection [15].


   



    The durability of barrier protection has been examined in a number of studies and it has been shown that NRL gloves provide lower rates of perforation and lower viral leakage rates than vinyl gloves [24,28,29]. In a study that examined gloves after manipulation to simulate in-use conditions, the failure rate was 0–4% for NRL, 1–3% for nitrile and 12–61% for vinyl gloves, indicating better barrier protection by NRL and nitrile gloves, compared to vinyl [29]. In another study where gloves were stressed according to a designated protocol before examining for leakage properties, failure rates were 2.2% for NRL and 1.3% for nitrile gloves, which were again better than for vinyl or copolymer (8.2% for each) [30]. Barrier integrity following an abrasion test demonstrated that NRL gloves were better than vinyl, although not as good as either nitrile or neoprene [31].


   



    A study in the USA in 2004 performed post-usage examination and testing of surgeons’ gloves after routine surgical procedures. The results revealed higher after-use defects for non-latex compared with latex disposable gloves [32]. Compared with NRL gloves, the odds ratio for defects was 1.39 (95% confidence interval 1.12–1.73) for neoprene and 1.90 (95% confidence interval 1.15–3.13) for nitrile gloves. In addition, the surgeons reported significantly greater satisfaction with regard to factors such as quality, safety and durability for latex compared with non-latex gloves. These results should probably be treated with caution because the surgeons had never used non-latex gloves before for routine surgery (acknowledged by the authors as a possible bias) and only 215 nitrile gloves were used compared to 2,647 latex and 3,624 neoprene gloves. In addition, the main difference in the study was in visible leaks, with no significant difference in water leaks, which may be explained by the low tear propagation strength of nitrile/neoprene. Similar differences between neoprene and NRL have been demonstrated in another study [33] where it was noted that punctures in neoprene gloves were detected more readily by surgeons than punctures in NRL gloves.
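    For readers unfamiliar with the statistic, an odds ratio and its Wald confidence interval are computed from a 2x2 table as sketched below. The counts in the example are hypothetical, since the study reports only the ratios and intervals.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval from a 2x2 table.

    a, b: defects / no-defects in the comparison gloves;
    c, d: defects / no-defects in the reference (NRL) gloves.
    """
    odds_ratio = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log odds ratio
    lower = math.exp(math.log(odds_ratio) - z * se_log)
    upper = math.exp(math.log(odds_ratio) + z * se_log)
    return odds_ratio, lower, upper

# Hypothetical counts: 20/80 defects vs 10/90 in the reference group
print(odds_ratio_ci(20, 80, 10, 90)[0])  # 2.25
```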


   



    A recent study [34], comparing synthetic polyisoprene and NRL gloves during heavy orthopaedic surgery with high risk of perforations, revealed a significantly higher perforation rate in latex-free gloves (80.0%) compared with NRL gloves (34.4%). Again, the study was poorly controlled and open to criticism because glove usage was not randomized, being based on two surgical teams in two different hospitals, one using NRL, the other using polyisoprene. It is, however, interesting that these three studies appear to detect highly significant differences in perforation rates between NRL and non-NRL [32,33,34].


   



    Fit and Comfort


    According to the Scientific Committee on Medicinal Products and Medical Devices of the European Commission [35], nitrile gloves are usually of lower tensile strength than surgical gloves, but their elastic modulus, or stiffness, is somewhat higher. In addition, nitrile has a higher permanent set than latex, meaning that once stretched it does not fully recover. Thus, nitrile gloves tend to be designed to fit more loosely than latex, and the combination of these properties may affect the users’ tactile sensation and delicacy of touch. This has been confirmed by a study [36] where participants noted that nitrile gloves that fitted their fingers were too narrow for their hands and gloves that fitted their hands were too large for their fingers. During this research, it was confirmed that there are detectable differences between nitrile and latex, where a pegboard test demonstrated an 8.6% increase in fine finger dexterity for latex over nitrile, although no differences related to gross dexterity. Whilst it is not clear at present what the practical effects of this research mean, it does appear that the stiffness of nitrile may affect user dexterity. The study also questioned users about their preferred material, with 67% preferring latex and 21% preferring nitrile.


   



    Thus, a variety of factors, including glove strength, abrasive resistivity, dexterity and comfort, should be taken into account when selecting gloves for specific needs.


   



    Enhanced Barrier Performance by Means of Double Gloving


    When carrying out operations, perforations in gloves often go unnoticed and there is frequently a risk of contamination and exposure to blood-borne pathogens [37,38].


   



    As a result, double gloving has been routinely used by a proportion of surgeons since the early 1990s. However, double gloving is reported to be less common in the UK, Europe and the USA than in other countries, except in the areas of orthopaedics and maxillofacial surgery [38,39]. Double gloving is generally carried out with two layers of NRL gloves [38], although sometimes the inner glove can be synthetic, which may reduce the risk of allergic reactions.


  The continuous flowmeter
Posted by: tong12pp05 - Yesterday, 05:47 AM - Forum: Ark - No replies


    Flow meters are classed as volumetric or inferential, the latter term referring to meters that infer velocity from other variables, such as the pressure difference across an orifice plate. There is a large variety of flow measurement devices, using numerous physical principles. Full discussion of the whole range of flow measurement devices is outside the scope of this book, but the reader will find a comprehensive reference in the Flow Measurement Handbook (Baker, 2000). Table 18.4 gives typical information on some of the flow meters usually encountered in the water industry.


    Mass flow meters such as the Coriolis meter provide a more sophisticated metering device. Sometimes configured in a distinctive U-tube shape, an internal tube is set oscillating using an electric current supplied to coils at either end of the tube. The flow of liquid through the tube sets up a twisting force on the inner tube due to the naturally occurring Coriolis effect. Sensors fitted along the length of the tube detect and measure the twisting force, which is a function of the mass flow rate; the processed data provide flow and fluid density information.
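    The measurement principle can be sketched as follows. To first order, the time lag between the two pickoff sensors induced by the tube's twist is proportional to mass flow rate; the proportionality constant is a meter-specific calibration factor, hypothetical here.

```python
def coriolis_mass_flow(sensor_lag_s, k_factor):
    """Mass flow rate (kg/s) from a Coriolis meter's pickoff time lag.

    The twist of the vibrating tube delays one pickoff signal
    relative to the other; to first order the lag is proportional
    to mass flow. k_factor (kg/s per second of lag) is a
    meter-specific calibration constant, hypothetical here.
    """
    return k_factor * sensor_lag_s

# A 2-microsecond lag with an illustrative k_factor of 5e5
print(coriolis_mass_flow(2e-6, 5e5))  # 1.0 kg/s
```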


   



    The principles of orifice and venturi meters are discussed in Section 14.16. Two other kinds of inferential (or momentum) meter are the Dall tube and the V cone venturi. In both, flow accelerates through a constriction, producing a pressure drop; the pressure difference is measured in the Dall tube and the V cone venturi as an indicator of velocity (and so flow), in the same way as for an orifice. The V cone venturi design is claimed to have a turn-down ratio of 25:1, to be less affected by upstream and downstream conditions, and to be suitable for fitting into shorter lengths of straight pipe than is recommended for other meter types. Further types of momentum meter are indicated in Table 18.5.


   



    An ultrasonic flow meter, as shown in Fig. 16.11, measures the velocity of a fluid to calculate volume flow. It can measure the average velocity along the path of an emitted beam of ultrasound, either by averaging the difference in measured transit time between pulses of ultrasound propagating with and against the direction of flow, or by measuring the frequency shift from the Doppler effect. Ultrasonic flow meters are affected by the acoustic properties of the fluid and can be impacted by temperature, density, viscosity and suspended particulates. They are often inexpensive to use and maintain because, unlike mechanical flow meters, they have no moving parts.
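    The transit-time principle can be sketched numerically. This is the generic formulation (not taken from this text): pulses travel faster with the flow than against it, and the difference in reciprocal transit times gives the path-average velocity independently of the speed of sound.

```python
import math

def transit_time_velocity(path_m, angle_deg, t_down_s, t_up_s):
    """Path-average velocity from upstream/downstream transit times.

        v = L / (2 cos(theta)) * (1/t_down - 1/t_up)

    path_m is the acoustic path length, angle_deg the beam angle
    to the pipe axis. The speed of sound cancels out.
    """
    theta = math.radians(angle_deg)
    return path_m / (2.0 * math.cos(theta)) * (1.0 / t_down_s - 1.0 / t_up_s)
```

    In practice, meters average many pulse pairs to suppress noise, since the transit-time difference is only microseconds for typical water velocities.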


   



    Insertion probe flow meters are installed for temporary measurement of flow for consumption surveys or for distribution network analyses. These instruments are either the turbine or electromagnetic (EM) type, the latter becoming more common. Both are inserted into the pipe where flow measurement is required. The turbine type uses a small rotating vane at the end of a probe to record flow velocity. The vane is susceptible to damage, in which case the instrument has to be returned to the manufacturer for repair and recalibration. The turbine meter is inserted through a 40 mm diameter tapping in the pipe, which has to be of at least 200 mm diameter. The EM probe (Plate 30(c)) uses an electromagnet at its end to apply a magnetic field to the water. Electrodes on either side of the probe pick up the induced EMF in the water, which is proportional to the velocity past the electrodes. The tapping for an EM insertion probe is 20 mm diameter and can usually be installed in pipes of diameter 150 mm and greater. EM probes are made up to 1 m long; therefore, they cannot be used for pipes of diameter greater than 900 mm, and they are restricted to flows with velocity less than about 1.75 to 2.0 m/s due to the flexibility of the probe.


   



    Insertion probes measure the velocity at the position of the measuring device. This can be at the pipe centre line or at defined points along the diameter. The measured velocity has to be converted to mean pipe velocity of flow by relating the measured value to the average velocity across the whole pipe. For this, a velocity profile for the pipe is used, determined by using the same instrument to record velocities at set points across the diameter from crown to invert. The recorded measurements are corrected to take account of the disturbance caused by the instrument itself (increased local velocity). The disturbance coefficients are unique to each instrument and are provided by its manufacturer. For the conditions usually encountered, the ratio of the mean velocity to the centreline velocity is 0.83 but can range from 0.7 to 1.0. Values differing widely from 0.83 should be viewed with caution and the cause investigated. However, satisfactory results should be obtained if the internal diameter is measured accurately and if the number of flow profile readings is sufficient: five for pipes of DN 150, nine for DN 300 and 13 for larger pipes. The measurements should be repeated at least three times to ensure the ratio is consistent and repeatable; the flow must be relatively consistent during each profile run. In practice, poor field conditions often make precise measurement difficult, so several attempts may be necessary. Once the profile is established satisfactorily, the instrument is set at the pipe centre line and the data logger is attached. In pipes in poor condition the velocity profile changes with flow, and measurements can be very inaccurate at flows significantly different from that at which the velocity profile was established.
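    The conversion from a centreline reading to volumetric flow can be sketched as below; the function name is hypothetical, and the 0.83 default and 0.7-1.0 plausibility band are the figures quoted above.

```python
import math

def pipe_flow_from_centerline(v_centre_ms, diameter_m, ratio=0.83):
    """Volumetric flow (m^3/s) from an insertion-probe centreline velocity.

    ratio is the mean/centreline velocity ratio established by the
    profile survey; 0.83 is typical, and values outside ~0.7-1.0
    should be investigated rather than used.
    """
    if not 0.7 <= ratio <= 1.0:
        raise ValueError("profile ratio outside plausible 0.7-1.0 range")
    area = math.pi * diameter_m ** 2 / 4   # internal cross-section
    return ratio * v_centre_ms * area
```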


   



    Making tappings and installing insertion probes poses a risk to water quality. Although such risks can be managed, strap-on ultrasonic flow meters are increasingly being used as an alternative; they avoid the tedious exercise of velocity profiling. Versions of these meters can be installed on all sizes and materials of pipe used in distribution systems.


   



    For an orifice meter, Equation (14.24) applies equally. However, C varies much more, depending on the velocity and the ratio d/D. If the latter is in the typical range of 0.4–0.6, and for pipe diameters greater than 200 mm, the value of C will be in the range 0.60–0.61 for the velocities usually experienced in a pipeline.
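Equation (14.24) is not reproduced in this excerpt, so the sketch below uses the generic ISO 5167-style orifice relation Q = C·A·√(2Δp/ρ) / √(1−β⁴), with an assumed discharge coefficient C = 0.61 from the range quoted above; all numbers are illustrative.

```python
import math

def orifice_flow(dp_pa: float, d_m: float, D_m: float,
                 rho: float = 1000.0, C: float = 0.61) -> float:
    """Volumetric flow (m^3/s) through an orifice plate for an
    incompressible fluid (expansibility factor taken as 1)."""
    beta = d_m / D_m                      # diameter ratio d/D
    area = math.pi * d_m ** 2 / 4         # orifice bore area
    return C * area * math.sqrt(2 * dp_pa / rho) / math.sqrt(1 - beta ** 4)

# Example: 10 kPa differential across a 150 mm orifice in a 300 mm
# water pipe (beta = 0.5).
q = orifice_flow(10_000, 0.15, 0.30)
```

For this example the flow works out to roughly 0.05 m³/s; note how the √(1−β⁴) velocity-of-approach term grows as d/D increases, which is one reason C is quoted only for a given β range.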


   



    The accuracy of measurement of both venturi and orifice meters depends on the lateral flow distribution through the device and can be severely affected by flow disturbances created by fittings in a pipe system. Detailed conditions for accurate measurement are laid down in BS EN ISO 5167-1:2003. These can generally be met by ensuring that for d/D ratios not exceeding 0.6, there are at least 20 diameters of straight pipe without a fitting upstream of the meter and 7 diameters of straight pipe downstream. The venturi is designed to minimize the headloss and this can be made very small with a well-designed expansion downstream of the throat providing good recovery of pressure head. The headloss through an orifice is substantially higher because of the sudden expansion of the diameter downstream. If headloss is an important consideration a venturi meter, such as the illustrated Dall tube, should be considered but venturis are more expensive and require greater space than an orifice meter.


   



    Subsea manifolds have been used in the development of oil and gas fields to simplify the subsea system, minimize the use of subsea pipelines and risers, and optimize the fluid flow of production in the system. The manifold is an arrangement of piping and/or valves designed to combine, distribute, control, and often monitor fluid flow. Subsea manifolds are installed on the seabed within an array of wells to gather product or to inject water or gas into wells as shown in Figure 19-1. There are numerous types of manifolds, ranging from a simple pipeline end manifold (PLEM/PLET) to large structures such as an entire subsea process system. A PLEM is one of the most common manifolds in this range and it is detailed in the next section separately because it is directly connected to pipelines and its installation considerations are key factors in the design.


   



    A subsea manifold system is structurally independent from the wells. The wells and pipelines are connected to the manifold by jumpers. The subsea manifold system mainly comprises a manifold and a foundation. The manifold support structure is an interface between the manifold and the foundation; it provides the facilities needed to guide, level, and orient the manifold relative to the foundation. The connection between the manifold and the manifold support structure allows the manifold to be retrieved and reinstalled with sufficient accuracy to allow subsequent reuse of the production and well jumpers.


   



    A manifold is a structural frame with piping, valves, control module, pigging loop, flow meters, etc. The foundation provides structural support for the manifold. It may be either a mudmat with skirt or a pile foundation, depending on seabed soil conditions and manifold size.


   



    Any obstruction inserted into a duct or pipe that creates a measurable pressure difference can be used as a flow meter. The three basic standardized flow measurement devices presented above are perhaps more suitable for laboratory work than for installation as permanent ductwork instruments in ventilation applications. They are sensitive to flow disturbances, relatively expensive, require considerable space, and have a narrow measurement range and a high permanent pressure loss. For these reasons, numerous attempts have been made to develop instruments without these drawbacks. Some of them, like the Dall tube, which is a modification of the venturi, have even become standard instruments. Several other solutions based on plates, rings, or wing-type obstructions are commercially available. This wide variety of devices is not covered here; for further information, the reader should contact the manufacturers of such instruments.


  How Does Bitcoin Mining Work?
Posted by: tong12pp05 - Yesterday, 05:46 AM - Forum: Ark - No replies

How Does Bitcoin Mining Work?

    Bitcoin mining is the process by which new bitcoins are entered into circulation; it is also the way that new transactions are confirmed by the network and a critical component of the maintenance and development of the blockchain ledger. "Mining" is performed using sophisticated hardware that solves an extremely complex computational math problem. The first computer to find the solution to the problem is awarded the next block of bitcoins and the process begins again.


    However, before you invest the time and equipment, read this explainer to see whether mining is really for you. We will focus primarily on Bitcoin (throughout, we'll use "Bitcoin" when referring to the network or the cryptocurrency as a concept, and "bitcoin" when we're referring to a quantity of individual tokens).


    The primary draw for many miners is the prospect of being rewarded with Bitcoin. That said, you certainly don't have to be a miner to own cryptocurrency tokens. You can also buy cryptocurrencies using fiat currency; you can trade them on an exchange like Bitstamp using another crypto (for example, using Ethereum or NEO to buy Bitcoin); you can even earn them by shopping, publishing blog posts on platforms that pay users in cryptocurrency, or setting up interest-earning crypto accounts.


   



    The Bitcoin reward that miners receive is an incentive that motivates people to assist in the primary purpose of mining: to legitimize and monitor Bitcoin transactions, ensuring their validity. Because these responsibilities are spread among many users all over the world, Bitcoin is a "decentralized" cryptocurrency, or one that does not rely on any central authority like a central bank or government to oversee its regulation.



   



    Mining to Prevent Double Spend


    Miners are getting paid for their work as auditors. They are doing the work of verifying the legitimacy of Bitcoin transactions. This convention is meant to keep Bitcoin users honest and was conceived by Bitcoin's founder, Satoshi Nakamoto.1 By verifying transactions, miners are helping to prevent the "double-spending problem." 


   



    Double spending is a scenario in which a Bitcoin owner illicitly spends the same bitcoin twice. With physical currency, this isn't an issue: once you hand someone a $20 bill to buy a bottle of vodka, you no longer have it, so there's no danger you could use that same $20 bill to buy lotto tickets next door. While there is the possibility of counterfeit cash being made, it is not exactly the same as literally spending the same dollar twice. With digital currency, however, as the Investopedia dictionary explains, "there is a risk that the holder could make a copy of the digital token and send it to a merchant or another party while retaining the original."


   



    Let's say you had one legitimate $20 bill and one counterfeit of that same $20. If you were to try to spend both the real bill and the fake one, someone that took the trouble of looking at both of the bills' serial numbers would see that they were the same number, and thus one of them had to be false. What a Bitcoin miner does is analogous to that—they check transactions to make sure that users have not illegitimately tried to spend the same bitcoin twice. This isn't a perfect analogy—we'll explain in more detail below.


   



     Only 1 megabyte of transaction data can fit into a single bitcoin block. The 1 MB limit was set by Satoshi Nakamoto, and this has become a matter of controversy as some miners believe the block size should be increased to accommodate more data, which would effectively mean that the bitcoin network could process and verify transactions more quickly.


    "So after all that work spent whatsminer, I might still not get any bitcoin for it?"


    That is correct. To earn bitcoins, you need to be the first miner to arrive at the right answer, or closest answer, to a numeric problem. This process is also known as proof of work (PoW).


   



    "What do you mean, 'the right answer to a numeric problem'?"


    The good news: No advanced math or computation is really involved. You may have heard that miners are solving difficult mathematical problems—that's true, but not because the math itself is hard. What they're actually doing is trying to be the first miner to come up with a 64-digit hexadecimal number (a "hash") that is less than or equal to the target hash. It's basically guesswork.1
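A toy version of this guessing process can be written in a few lines. The header bytes and the deliberately easy target below are made up for illustration; real Bitcoin hashes an 80-byte block header against a far harder target.

```python
import hashlib

def mine(header: bytes, target: int, max_nonce: int = 2**32):
    """Search nonces until double-SHA-256(header + nonce) <= target."""
    for nonce in range(max_nonce):
        data = header + nonce.to_bytes(4, "little")  # 32-bit nonce, as in Bitcoin
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce, digest.hex()
    return None  # exhausted the nonce space without a hit

# Deliberately easy demo target: any hash whose top byte is zero
# qualifies, so a hit is expected within a few hundred guesses.
nonce, digest = mine(b"demo-block-header", 2**248 - 1)
```

There is no shortcut in the loop: each nonce is simply tried in turn, which is why the only way to "solve" the problem faster is to compute more hashes per second.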


   



    The bad news: It's a matter of guesswork or randomness, but with the total number of possible guesses for each of these problems being on the order of trillions, it's incredibly arduous work. And the difficulty of finding a solution only increases as more miners join the network (this is reflected in the mining difficulty). In order to solve a problem first, miners need a lot of computing power. To mine successfully, you need to have a high "hash rate," which is measured in gigahashes per second (GH/s) and terahashes per second (TH/s).


   



     If you want to estimate how much bitcoin you could mine with your mining rig's hash rate, the site Cryptocompare offers a helpful calculator. Other web resources offer similar tools.


    Mining and Bitcoin Circulation


    In addition to lining the pockets of miners and supporting the Bitcoin ecosystem, mining serves another vital purpose: It is the only way to release new cryptocurrency into circulation. In other words, miners are basically "minting" currency. For example, as of September 2021, there were around 18.82 million bitcoins in circulation, out of an ultimate total of 21 million.2


   



    Aside from the coins minted via the genesis block (the very first block, which was created by founder Satoshi Nakamoto), every single one of those bitcoins came into being because of miners. In the absence of miners, Bitcoin as a network would still exist and be usable, but there would never be any additional bitcoin. However, because the rate of bitcoin "mined" is reduced over time, the final bitcoin won't be circulated until around the year 2140. This does not mean that transactions will cease to be verified. Miners will continue to verify transactions and will be paid in fees for doing so in order to keep the integrity of Bitcoin's network.3


   



    Aside from the short-term Bitcoin payoff, being a coin miner can give you "voting" power when changes are proposed in the Bitcoin network protocol. This is known as a BIP (Bitcoin Improvement Proposal). In other words, miners have some degree of influence on the decision-making process for matters such as forking.


   



    How Much a Miner Earns


    The rewards for Bitcoin mining are reduced by half roughly every four years.1 When bitcoin was first mined in 2009, mining one block would earn you 50 BTC. In 2012, this was halved to 25 BTC. By 2016, this was halved again to 12.5 BTC. On May 11, 2020, the reward halved again to 6.25 BTC.
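The halving schedule described above can be sketched directly from the consensus parameters: a 50 BTC initial subsidy, halved every 210,000 blocks (roughly four years).

```python
def block_subsidy(height: int) -> float:
    """Mining subsidy (BTC) at a given block height."""
    halvings = height // 210_000          # completed halving epochs
    return 50.0 / (2 ** halvings) if halvings < 64 else 0.0

# Subsidy at the start of the first four epochs.
subsidies = [block_subsidy(h) for h in (0, 210_000, 420_000, 630_000)]
# -> 50, 25, 12.5 and 6.25 BTC, matching the 2009/2012/2016/2020 rewards.
```

Because the subsidy shrinks geometrically, the total ever issued converges toward the 21 million cap mentioned later in the article.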


   



    In September of 2021, the price of Bitcoin was about $45,000 per bitcoin, which means you'd have earned $281,250 (6.25 x 45,000) for completing a block.4 Not a bad incentive to solve that complex hash problem detailed above, it might seem.


   



    Although early on in Bitcoin's history individuals may have been able to compete for blocks with a regular at-home personal computer, this is no longer the case. The reason for this is that the difficulty of mining Bitcoin changes over time.


   



    In order to ensure the smooth functioning of the blockchain and its ability to process and verify transactions, the Bitcoin network aims to have one block produced every 10 minutes or so. However, if there are one million mining rigs competing to solve the hash problem, they'll likely reach a solution faster than a scenario in which 10 mining rigs are working on the same problem. For that reason, Bitcoin is designed to evaluate and adjust the difficulty of mining every 2,016 blocks, or roughly every two weeks.1
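A simplified sketch of this retargeting rule follows; it scales difficulty by how fast the last 2,016 blocks actually arrived relative to the two-week target (Bitcoin additionally clamps each adjustment to a factor of four, which the sketch reproduces).

```python
def retarget(old_difficulty: float, actual_seconds: float) -> float:
    """Scale difficulty so 2,016 blocks take two weeks (10 min/block)."""
    expected = 2016 * 600                 # two weeks, in seconds
    ratio = expected / actual_seconds     # blocks came fast -> ratio > 1
    ratio = max(0.25, min(4.0, ratio))    # consensus clamps each step to 4x
    return old_difficulty * ratio

# Blocks arrived in one week instead of two: difficulty doubles.
new_d = retarget(100.0, 2016 * 300)
```

The feedback loop is the point: more hash power makes blocks arrive early, which raises difficulty, which pushes the block interval back toward 10 minutes.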


   



    When there is more computing power collectively working to mine for bitcoins, the difficulty level of mining increases in order to keep block production at a stable rate. Less computing power means the difficulty level decreases. At today's network size, a personal computer mining for bitcoin will almost certainly find nothing.


   



    All of this is to say that, in order to mine competitively, miners must now invest in powerful computer equipment like a GPU (graphics processing unit) or, more realistically, an application-specific integrated circuit (ASIC). These can run from $500 to the tens of thousands. Some miners—particularly Ethereum miners—buy individual graphics cards (GPUs) as a low-cost way to cobble together mining operations.


   



    An Analogy


    Say I tell three friends that I'm thinking of a number between one and 100, and I write that number on a piece of paper and seal it in an envelope. My friends don't have to guess the exact number; they just have to be the first person to guess any number that is less than or equal to the number I am thinking of. And there is no limit to how many guesses they get.


   



    Let's say I'm thinking of the number 19. If Friend A guesses 21, they lose because 21 > 19. If Friend B guesses 16 and Friend C guesses 12, then both have theoretically arrived at viable answers, because 16 < 19 and 12 < 19. There is no "extra credit" for Friend B, even though B's answer was closer to the target answer of 19. Now imagine that I pose the "guess what number I'm thinking of" question, but I'm not asking just three friends, and I'm not thinking of a number between 1 and 100. Rather, I'm asking millions of would-be miners, and I'm thinking of a 64-digit hexadecimal number. Now you see that it's going to be extremely hard to guess the right answer.


   



    If B and C both answer simultaneously, then the analogy breaks down.


   



    In Bitcoin terms, simultaneous answers occur frequently, but at the end of the day, there can only be one winning answer. When multiple simultaneous answers are presented that are equal to or less than the target number, the Bitcoin network will decide by a simple majority—51%—which miner to honor.


   



    Typically, it is the miner who has done the most work or, in other words, the one that verifies the most transactions. The losing block then becomes an "orphan block." Orphan blocks are those that are not added to the blockchain. Miners who successfully solve the hash problem but who haven't verified the most transactions are not rewarded with bitcoin.


   



    Remember that analogy, where the number 19 was written on a piece of paper and put in a sealed envelope? In Bitcoin mining terms, that metaphorical undisclosed number in the envelope is called the target hash.


   



    What miners are doing with those huge computers and dozens of cooling fans is guessing at the target hash. Miners make these guesses by randomly generating as many "nonces" as possible, as fast as possible. A nonce is short for "number only used once," and the nonce is the key to generating these 64-digit hexadecimal numbers I keep talking about. In Bitcoin mining, a nonce is 32 bits in size—much smaller than the hash, which is 256 bits. The first miner whose nonce generates a hash that is less than or equal to the target hash is awarded credit for completing that block and is awarded the spoils of 6.25 BTC.


   



    In theory, you could achieve the same goal by rolling a 16-sided die 64 times to arrive at random numbers, but why on earth would you want to do that?


   



    The screenshot below, taken from the site Blockchain.info, might help you put all this information together at a glance. You are looking at a summary of everything that happened when block #490163 was mined. The nonce that generated the "winning" hash was 731511405. The target hash is shown on top. The term "Relayed by Antpool" refers to the fact that this particular block was completed by AntPool, one of the more successful mining pools (more about mining pools below).


   



    As you see here, their contribution to the Bitcoin community is that they confirmed 1768 transactions for this block. If you really want to see all 1768 of those transactions for this block, go to this page and scroll down to the heading "Transactions."


   



    To find such a hash value, you have to get a fast mining rig, or, more realistically, join a mining pool—a group of coin miners who combine their computing power and split the mined Bitcoin. Mining pools are comparable to those Powerball clubs whose members buy lottery tickets en masse and agree to share any winnings. A disproportionately large number of blocks are mined by pools rather than by individual miners.


   



    In other words, it's literally just a numbers game. You cannot guess the pattern or make a prediction based on previous target hashes. At today's difficulty levels, the odds of finding the winning value for a single hash is one in the tens of trillions.5 Not great odds if you're working on your own, even with a tremendously powerful mining rig.


   



    Not only do miners have to factor in the costs associated with expensive equipment necessary to stand a chance of solving a hash problem. They must also consider the significant amount of electrical power mining rigs utilize in generating vast quantities of nonces in search of the solution. All told, Bitcoin mining is largely unprofitable for most individual miners as of this writing. The site Cryptocompare offers a helpful calculator that allows you to plug in numbers such as your hash speed and electricity costs to estimate the costs and benefits.


   



    Mining rewards are paid to the miner who discovers a solution to the puzzle first, and the probability that a participant will be the one to discover the solution is proportional to that participant's share of the total mining power on the network.


   



    Participants with a small percentage of the mining power stand a very small chance of discovering the next block on their own. For instance, a mining card that one could purchase for a couple of thousand dollars would represent less than 0.001% of the network's mining power. With such a small chance of finding the next block, it could be a long time before that miner finds a block, and rising difficulty makes things even worse. The miner may never recoup their investment. The answer to this problem is mining pools.
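A back-of-the-envelope sketch of why solo mining at that scale rarely pays: assuming the network's steady rate of roughly 144 blocks per day (one every 10 minutes), a miner's expected blocks per day is just their hash-rate share times 144. The 0.001% share mirrors the figure in the text; all numbers are illustrative.

```python
def expected_days_per_block(share_of_network: float) -> float:
    """Average days between blocks for a miner with this hash-rate share."""
    blocks_per_day = 144 * share_of_network   # ~144 blocks/day network-wide
    return 1 / blocks_per_day

# A rig with 0.001% (0.00001) of the network's power.
days = expected_days_per_block(0.00001)
```

At that share the expected wait is on the order of two years per block, and since block discovery is random, the actual wait can easily be far longer; pooling converts that lottery into a steady trickle of proportional payouts.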


   



    Mining pools are operated by third parties and coordinate groups of miners. By working together in a pool and sharing the payouts among all participants, miners can get a steady flow of bitcoin starting the day they activate their miners. Statistics on some of the mining pools can be seen on Blockchain.info.


   



    "I've done the math. Forget mining. Is there a less onerous way to profit from cryptocurrencies?"


    As mentioned above, the easiest way to acquire Bitcoin is to simply buy it on one of the many exchanges. Alternatively, you can always leverage the "pickaxe strategy." This is based on the old saw that during the 1849 California gold rush, the smart investment was not to pan for gold, but rather to make the pickaxes used for mining.


   



    To put it in modern terms, invest in the companies that manufacture those pickaxes. In a cryptocurrency context, the pickaxe equivalent would be a company that manufactures equipment used for Bitcoin mining. You may consider looking into companies that make ASICs equipment or GPUs instead, for example.


  Aluminum CNC Machine: Benefits and Possible Alternatives
Posted by: tong12pp05 - Yesterday, 05:45 AM - Forum: Ark - No replies

Aluminum CNC Machine: Benefits and Possible Alternatives


    For CNC machining projects, aluminum is one of the most popular material choices due to its desirable physical properties. It is strong, which makes it ideal for mechanical parts, and its oxidized outer layer is resistant to corrosion from the elements. These benefits have made aluminum parts common across all industries, though they are particularly favored in the automotive, aerospace, healthcare and consumer electronics spheres.


    Aluminum also offers specific advantages that simplify and improve the process of CNC machining. Unlike many other metals with similar material properties, aluminum offers excellent machinability: many of its grades chip easily under cutting tools and are relatively easy to shape. Because of this, aluminum can be machined more than three times faster than iron or steel.


    This article explains some of the key advantages of aluminum CNC machining — reasons why it is one of our most widely requested prototyping and production processes — but also suggests machining alternatives to aluminum.


   



    Other metals and plastics can provide similar benefits to aluminum, in addition to the unique benefits of their own.


    Machinability


    One of the main reasons why engineers choose aluminum for their machined parts is because, quite simply, the material is easy to machine. While this would appear to be more of a benefit for the machinist manufacturing the part, it also has significant benefits for the business ordering the part, as well as the end-user that will eventually use it.


   



    Because aluminum chips easily, and because it is easy to shape, it can be cut quickly and accurately with aluminum CNC milling. This has some important consequences: firstly, the short timeframe of the machining job makes the process cheaper (because less labor is required from the machinist and less operating time is required from the machine itself); secondly, good machinability means less deformation of the part as the cutting tool goes through the workpiece. This can allow the machine to meet tighter tolerances (as low as ±0.025 mm) and leads to higher accuracy and repeatability.


   



    Corrosion resistance


    Different aluminum grades differ greatly in their resistance to corrosion — the degree to which they can withstand oxidization and chemical damage. Fortunately, some of the most popular grades for CNC machining are among the most resistant. 6061, for example, offers excellent corrosion resistance, as do other alloys on the lower end of the strength spectrum. (Strong aluminum alloys may be less resistant to corrosion due to the presence of alloyed copper.)


   



    Strength-to-weight ratio


    Aluminum has desirable physical properties that make it ideal for both mechanical and cosmetic parts. Two of the most important are the metal’s high strength and its light weight, both of which make the material favorable for critical parts such as those required in the aerospace and automotive industries. Aircraft fittings and automotive shafts are two examples of parts that can be successfully machined from aluminum.


   



    However, different grades of aluminum serve different purposes. Because of their favorable strength-to-weight ratio, general-use grades like 6061 can be used for a wide variety of parts, while notably high-strength grades like 7075 may be preferred in aerospace and marine applications.


   



    Electrical conductivity


    CNC machined aluminum parts can be useful for electrical components due to their electrical conductivity. Though not as conductive as copper, pure aluminum has an electrical conductivity of about 37.7 million siemens per meter at room temperature. Alloys may have lower conductivities, but aluminum materials are significantly more conductive than, for example, stainless steel.
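To put those conductivities in perspective, the resistance of a round wire follows R = L/(σA). The sketch below compares 1 m of 1 mm wire in aluminum and copper, using the 37.7 MS/m figure from the text and a textbook room-temperature value of about 58 MS/m for copper (an assumption, not from this article).

```python
import math

SIGMA_AL = 37.7e6   # pure aluminum conductivity (S/m), as cited in the text
SIGMA_CU = 58.0e6   # copper, assumed textbook room-temperature value

def wire_resistance(length_m: float, diameter_m: float, sigma: float) -> float:
    """Resistance (ohms) of a round wire: R = L / (sigma * A)."""
    area = math.pi * diameter_m ** 2 / 4
    return length_m / (sigma * area)

# 1 m of 1 mm-diameter wire in each metal.
r_al = wire_resistance(1.0, 0.001, SIGMA_AL)
r_cu = wire_resistance(1.0, 0.001, SIGMA_CU)
```

Under these assumptions aluminum shows roughly 1.5× the resistance of copper for the same geometry, which is why it is "conductive enough" for many components even though copper remains the benchmark.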


   



    Machined aluminum parts are especially popular in the consumer electronics industry, not just for strength and weight demands, but because of important aesthetic considerations. As well as being receptive to paints and tints, aluminum can be treated with anodization, a surface finishing procedure that thickens the protective and oxidized outer layer of the part.


   



    The anodization process, which generally takes place after machining is completed, involves passing an electric current through the part in an electrolytic acid bath and results in a piece of aluminum that is more resistant to physical impact and corrosion.


   



    Importantly, anodizing makes it easier to add color to a machined aluminum part, since the anodized outer layer is highly porous. Dyes can find their way through the porous sections of the outer layer and are less likely to chip or flake since they are embedded within the tough exterior of the metal part.


   



    Another benefit of aluminum is its high recyclability, which makes it preferable for businesses seeking to minimize their environmental impact or for those who simply want to reduce material wastage and recoup some of their expenditure. Recyclable materials are particularly important in CNC machining, where there is a relatively large amount of waste material in the form of chips from the cutting tool.


   



    Alternatives to aluminum in CNC machining


    Businesses may seek alternatives to aluminum for CNC machining for any number of reasons. After all, the metal has a few weaknesses: its oxide coating can damage tooling, and it is generally more expensive than alternatives like steel, partly due to the high energy costs of aluminum production.


   



    Here are some potential machining alternatives to aluminum, with an emphasis on their differences and similarities to the popular silver-gray metal.


   



    Steels and stainless steels are widely used materials in CNC machining. Because of their high strength, steels tend to be favored for high-stress applications and those that require strong welds. Steels are resistant to very high temperatures, and stainless steels can be heat treated to enhance their corrosion resistance.


   



    However, while machining steels are engineered for improved machinability, aluminum remains the more machinable of the two materials. Steels are also heavier and have a higher hardness than aluminum, which may or may not be desirable depending on the application.


   



    If temperature resistance is a key consideration and weight is not, steel may be an ideal alternative to aluminum.


   



    Titanium


    Better than aluminum for:


   



    Strength-to-weight ratio


    Worse than aluminum for:


   



    Cost


    Titanium may be used as a like-for-like replacement for aluminum since its primary advantage is an exceptional strength-to-weight ratio — also one of the main benefits of aluminum. Titanium is denser than aluminum but roughly twice as strong. Like aluminum, it is also highly resistant to corrosion.


   



    These advantages are reflected in the higher price point of titanium. Though the material is an excellent choice for parts like aircraft components and medical devices, its cost can be prohibitive.


   



    Titanium is a suitable alternative to aluminum when lightweight is a primary concern and, importantly, when the manufacturing budget has some flexibility.


   



    Magnesium


    Better than aluminum for:


   



    Machinability


    Weight


    Worse than aluminum for:


   



    Machining safety


    Corrosion resistance


    Although not the most common machining material, the lightweight metal magnesium offers many of the benefits of common aluminum alloys. In fact, magnesium is one of the most machinable metals out there, making the machining process fast and efficient.


   



    One potential downside for machine shops? Magnesium chips are extremely flammable, and water aggravates a magnesium fire rather than extinguishing it, which means machinists must take caution while clearing debris.


   



    Brass


    Better than aluminum for:


   



    Some aesthetic applications


    Worse than aluminum for:


   



    Cost


    A metal with a golden appearance, brass is a highly machinable metal available at a slightly higher price point than aluminum. It is commonly seen on parts such as valves and nozzles, as well as structural components, while its high machinability makes it suitable for high-volume orders.


   



    Copper


    Better than aluminum for:


   



    Electrical conductivity


    Worse than aluminum for:


   



    Machinability


    Copper shares several material properties with aluminum. However, the superior electrical conductivity of copper can make it preferable for various electrical applications. While pure copper is difficult to machine, many copper alloys offer similar machinability to popular aluminum grades.


   



    CNC machining projects need not be limited to metals. In fact, several engineering thermoplastics can match or exceed some of the benefits of aluminum, depending on the application.


   



    Since aluminum is often favored for its excellent machinability, one viable plastic alternative is POM (Delrin), which is, like aluminum, highly suited to the machining process. POM has a low melting point but impressively high strength for a plastic.


   



    POM is an electrical insulator, making it suitable for parts like electronic enclosures. It is also suitable for mechanical parts. However, given its radically different insulating behavior compared to aluminum, it should only be used as a like-for-like replacement in situations where thermal and electrical conductivity is of negligible importance.


   



    If aluminum remains the preferred material choice for a project, there are ways to combine CNC machining with other manufacturing processes in order to create more complex, higher-performing aluminum parts. Doing so can maximize the functionality of aluminum while reaping the benefits of multiple production processes.


   



    In addition to being an all-in-one manufacturing process, CNC machining can be used to refine or modify parts made using other machinery. Extrusion, casting and forging processes can each be complemented with machining to make better aluminum components.


   



    Aluminum extrusion + CNC machining


    Extrusion is the process of forcing heated material through an aperture in a die, producing an elongated component with a continuous profile. While aluminum extrusion is an effective way of producing functional components with quality surface finishes and complex cross-sections, it is limited in scope, since those cross-sections must be consistent along the part.


   



    Unless, of course, the part is modified after extrusion. Because aluminum extrusion tends to involve malleable, ductile and machinable aluminum grades like 6061 & 6063, the extruded parts can then be post-machined — cut in various ways using a CNC machining center.


   



    Combining aluminum extrusion and CNC machining is a great way to produce resilient parts with complex cross-sections and irregular geometries.


   



    Die casting + CNC machining


    Pressure die casting is a manufacturing process in which molten metal is forced into a mold cavity with high pressure. It is generally used when making parts in larger quantities since the required tool steel dies are expensive to make.


   



    Along with steel, magnesium, and zinc, aluminum is one of the more popular metals for pressure die casting, and die-cast aluminum parts generally have an excellent surface finish and dimensional consistency.


   



    These advantages can be combined with the advantages of CNC machining. By die casting aluminum components and then adding further cuts using a machining center, it is possible to create parts with an exceptional finish and more complex geometries than would be possible using either process on its own.


   



    Gravity die casting can be used instead of pressure die casting if reducing cost is more important than ensuring high precision or creating thin walls.


   



    Investment casting + CNC machining


    Investment casting is a metal casting process that uses wax patterns to create metal parts. Like other casting processes, it produces parts with an excellent surface finish and high dimensional accuracy.


   



    The process also produces unique advantages: it can be used to create more intricate parts than would be possible with die casting, and parts emerge with no parting lines.


   



    Aluminum alloys are a common material used for investment casting, and the cast aluminum parts can be post-machined for refinement.


   



    Forging + CNC machining


    Many machinable aluminum alloys are also suited to the age-old process of forging, which involves shaping metal through compressive force. (This often involves hitting the metal with a hammer.)


   



    Aluminum 6061, for example, is suited to hot forging with a closed die — a process commonly used to produce automotive and industrial components.


   



    The forged pieces of aluminum can be post-machined with a CNC machining center. This can be beneficial compared to machining alone since forged parts are generally stronger than fully cast or fully machined equivalents. However, post-machining allows for the creation of more complex geometries without wholly compromising the integrity of the part.


  The Benefits of Plant Extracts for Human Health
Posted by: tong12pp05 - Yesterday, 05:44 AM - Forum: Ark - No replies

The Benefits of Plant Extracts for Human Health

    Nature has always been, and still is, a source of foods and ingredients that are beneficial to human health. Nowadays, plant extracts are becoming increasingly important additives in the food industry due to their content of bioactive compounds such as polyphenols [1] and carotenoids [2], which have antimicrobial and antioxidant activity, especially against oxidative changes in low-density lipoprotein (LDL) and deoxyribonucleic acid (DNA) [3]. These compounds also delay the development of off-flavors and improve the shelf life and color stability of food products. Owing to their natural origin, they are excellent candidates to replace synthetic compounds, which are generally considered to have toxicological and carcinogenic effects. The efficient extraction of these compounds from their natural sources, and the determination of their activity in commercialized products, have been great challenges for researchers and food chain contributors seeking to develop products with positive effects on human health. The objective of this Special Issue is to highlight the existing evidence regarding the potential benefits of consuming plant extracts and plant extract-based products, along with essential oils that are likewise derived from plants, with emphasis on in vivo work and epidemiological studies, the application of plant extracts to improve the shelf life and the nutritional and health-related properties of foods, and the extraction techniques that can be used to obtain bioactive compounds from plants.

    In this context, Concha-Meyer et al. [4] studied the bioactive compounds of tomato pomace obtained by ultrasound-assisted extraction, showing that the functional extract had antithrombotic properties, such as platelet anti-aggregant activity comparable to that of commercial cardioprotective products. Turrini et al. [5] introduced bud derivatives from eight different plant species as a new category of botanicals containing polyphenols and studied how different extraction processes can affect their composition. Woody vine plants of Kadsura spp., belonging to the Schisandraceae family, produce edible red fruits that are rich in nutrients and antioxidant compounds such as flavonoids; extracts from these plants had antioxidant properties and also inhibited key enzymes [6]. Hence, fruit parts of Kadsura spp. other than the edible mesocarp could be used in future food additive applications rather than being wasted. Saji et al. [7] studied the phenolic content of rice bran, a by-product of the rice milling process normally used in animal feed or discarded due to its rancidity. Rice bran phenolic extracts were shown, via their metal-chelating properties and free radical scavenging activity, to target pathways of oxidative stress and inflammation, alleviating vascular inflammatory mediators. Villedieu-Percheron et al. [8] evaluated three natural diterpene compounds, extracted and isolated from the medicinal herb Andrographis paniculata, as possible inhibitors of NFκB (nuclear factor kappa-light-chain-enhancer of activated B cells) transcriptional activity. Yeon et al. [9] evaluated the antioxidant activity, the angiotensin I-converting enzyme (ACE) inhibition effect, and the α-amylase and α-glucosidase inhibition activities of hot pepper water extracts both before and after fermentation; these extracts proved to have potentially inhibitory effects against both hyperglycemia and hypertension. The hydrolyzed extracts of Ziziphus jujube fruit, commonly called jujube, were examined for their protective effect against lung inflammation in mice [10]; they contained significant amounts of flavonoids, which inhibited cytokine release from macrophages and promoted antioxidant defenses in vivo. Tran et al. [11] examined the antidiabetic activity of spray-dried Euphorbia hirta L. herb extracts containing high concentrations of bioactive compounds such as phenolics and flavonoids. Li et al. [12] reported that the intestinal microbiota is closely associated with the initiation and progression of diabetes mellitus and reviewed bioactive components that exhibit anti-diabetic activity by modulating it. Essential oils have promising activity against antibiotic-resistant bacteria and chemotherapy-resistant tumors; this was supported by the study of Viktorová et al. [13], in which lemongrass essential oil, and especially citral, its dominant component, proved to have potential antimicrobial and anticancer activity. Additionally, Mitropoulou et al. [14] investigated the antimicrobial potential of Sideritis raeseri subsp. raeseri essential oil against common food spoilage and pathogenic microorganisms and evaluated its antioxidant and antiproliferative activity. Salehi et al. [15] reviewed the Berberis plants, which contain alkaloids, tannins, phenolic compounds and essential oils, and their possible use in the food and pharmaceutical industries. Last but not least, Kiokias et al. [16] reviewed naturally occurring phenolic acids from plants and their antioxidant activities in o/w emulsions and in vitro lipid-based model systems.


   



    Still, more research is needed to explore in greater depth the health-beneficial effects of plant extracts, since nature certainly has more to give to humans.


   



    The antioxidative activity of a total of 92 phenolic extracts from edible and nonedible plant materials (berries, fruits, vegetables, herbs, cereals, tree materials, plant sprouts, and seeds) was examined by the autoxidation of methyl linoleate. The content of total phenolics in the extracts was determined spectrometrically according to the Folin-Ciocalteu procedure and calculated as gallic acid equivalents (GAE). Among edible plant materials, remarkably high antioxidant activity and high total phenolic content (GAE > 20 mg/g) were found in berries, especially aronia and crowberry. Apple extracts (two varieties) also showed strong antioxidant activity even though their total phenolic contents were low (GAE < 12.1 mg/g). Among nonedible plant materials, high activities were found in tree materials, especially willow bark, spruce needles, pine bark and cork, and birch phloem, and in some medicinal plants including heather, bog-rosemary, willow herb, and meadowsweet. In addition, potato peel and beetroot peel extracts showed strong antioxidant effects. To utilize these significant sources of natural antioxidants, further characterization of their phenolic composition is needed.


   



    This investigation examined the molluscicidal and larvicidal activity of eight plants that are used in the traditional medicine of the Pankararé indigenous people in the Raso da Catarina region, Bahia state, Brazil. The tested plants were chosen based on the results of previous studies; only those used either as insect repellents or to treat intestinal parasitic infections were included. Crude extracts (CEs) of these plants were tested for their larvicidal activity (against fourth-instar Aedes aegypti larvae) and molluscicidal activity (against the snail Biomphalaria glabrata). The plant species Scoparia dulcis and Helicteres velutina exhibited the best larvicidal activities (LC50 83.426 mg/L and LC50 138.896 mg/L, resp.), and Poincianella pyramidalis, Chenopodium ambrosoides, and Mimosa tenuiflora presented the best molluscicidal activities (LC50 0.94 mg/L, LC50 13.51 mg/L, and LC50 20.22 mg/L, resp.). As crude extracts were used as the test materials, further study is warranted to isolate and purify the most active compounds.


   



    The Brazilian northeast is the poorest region of Brazil and has the worst Human Development Indices [1]. Most of this population is subjected to neglected tropical diseases that predominantly affect the poorest and most vulnerable groups, contributing to the perpetuation of poverty, inequality, and social exclusion [2].


   



    Schistosomiasis and dengue fever cause major public health concerns in Brazil and other tropical developing countries. Schistosomiasis is caused by the parasite, Schistosoma mansoni, which uses the Biomphalaria glabrata snail as an essential intermediate host in its life cycle. Dengue fever is caused by an arbovirus of the Flaviviridae family and is transmitted by the mosquito, Aedes aegypti.


   



    The number of cases of dengue has grown in Brazil, with epidemics in the most densely populated urban areas. However, natural products with different biocidal activities can help to fight parasite vectors at the adult or larval stages and can act as alternatives to synthetic products due to their rapid biodegradation and lower cost [3].


   



    Molluscicides have been used as a general strategy to eliminate the snail that transmits schistosomiasis [4]. According to the World Health Organization (WHO), drug therapy in conjunction with molluscicides is the most valuable method to control schistosomiasis in areas with intermediate hosts. The synthetic substance niclosamide (Bayluscide) has been used as the standard molluscicide since the 1960s, as it is efficient in controlling snails; however, its high cost and the fact that it decomposes rapidly in sunlight have limited its use [5].


   



    Popular knowledge has been an important source of information for scientific research in several areas of study. Ethnopharmacological and ethnobotanical investigations have been used as the main strategy for selecting medicinal plants, thereby shortening the time for the discovery of new drugs, whereas ethnodirected research consists of selecting species based on information from population groups [6].


   



    Evidence for the efficacy and safety and the immediate availability of plant-derived products for the control or eradication of such diseases would be of great value because part of the population living in the affected areas use plants and animals as one of the few options for disease treatment [7–9].


   



    Studies have found evidence that standard methods control the dengue-related mosquito larvae with low efficacy, a situation that demonstrates the need for other means to fight the proliferation of dengue [10] given the fact that results in epidemiology are context-dependent [11]. Similarly, despite the fact that a national schistosomiasis control program was implemented in 1975, the disease still occurs in 19 states and is endemic to eight states.


   



    Ethnobiological studies have been carried out on the indigenous Pankararé people since 1993 [12]. In 2006 [13], the use of 64 plants was reported, 20 of which were used for medicinal purposes. Indeed, there is evidence that the Pankararé, in the Estação Ecológica Raso da Catarina (a conservation area), Bahia state, have a profound knowledge of the benefits of plants.


   



    This study examines the molluscicidal and larvicidal effects of eight plants used by the Pankararé indigenous people for medicinal purposes. The aim of the study is to look for evidence of alternative methods to fight vectors of schistosomiasis and dengue, taking into account local potentialities.


   



    The indigenous lands of the Pankararé are located in one of the driest Brazilian regions, with an average annual rainfall of between 450 and 600 mm [14] and an average annual temperature of 25°C; the climate is arid and semiarid. The natural vegetation is tropical dry forest of the hyperxerophylous steppic savanna type.


   



    The Pankararé have a long history of interaction with their regional neighbors and are a peasant social group that sees itself as a distinct ethnic group among the regional populations (from the social organization standpoint, this is termed Indigenous Peasantry). In Brazilian indigenous communities, the central political figure is the Cacique [15]. The Pankararé comprise a very poor group that has a long history of territorial disputes. They practice subsistence agriculture, farm livestock on a small scale, and engage in other activities, such as hunting, the collection of honey and wild fruits, and handicrafts [16].


  Natural raw materials for better animal health
Posted by: tong12pp05 - Yesterday, 05:42 AM - Forum: Ark - No replies

Natural raw materials for better animal health

    Commercial feed ingredients alone are usually not as palatable or nutritionally optimal as a balanced feed diet.

    However, such ingredients offer secure availability as nutritionally and health-wise optimal raw ingredients. Commercialising such ingredients is challenging, and they need to meet ethical, environmental, and economic standards. Thus, the feed industry is seeking alternative natural ingredients that promote growth whilst at the same time maintaining the health of domestic animals.

    Search for healthy alternatives

    The use of sustainable ingredients limits global warming, protects the ecosystem and respects natural resources. At the same time, it promotes health and does not induce physiological changes in the animal’s digestive system. Alternative feed ingredients may raise the overall cost of feed products because, when it comes to sustainability, not all feed ingredients are equal. Sustainable raw ingredients and health promoters must come from defined sources, and ecolabels may help to identify possible limitations.

    Single-cell alternatives

    Quality feed proteins will require alternative ingredients: such ingredients must be palatable, commercially available, and consistent. Sustainable availability of these ingredients must be supported by a low price, and they must not reduce the nutritional value of other nutrients found in the feed diet. Single-cell organisms have demonstrated a positive effect on animal health when used as a fish meal and soya bean meal replacement. Good substitutes may come from microalgae, bacterial meal, and yeasts. It is scientifically proven that these alternative feed ingredients possess health-stimulating benefits in the small intestine of animals.

    Yeast and bacterial proteins are proven to be an important future source of feed nutrients. Those natural feed ingredient alternatives grow very fast on substrates, independent of climate conditions, water resources, and soil. Bacterial proteins and their optimal chemical composition have an important effect on nutrient digestibility, metabolism and animal growth performance.

    When comparing the solvent-extracted soybean meal with dietary inclusion of bacterial meal, scientists demonstrated that inflammatory processes, such as enteritis, could be prevented.


   



    Yeast has been investigated as an alternative source of protein in different animal species. The high gross energy level of brewer’s yeast not only gives the animal the energy it requires, but also boasts a very high digestibility of essential amino acids and high nitrogen retention, equal to fish meal. No apparent difference was found in blood and plasma amino acid profiles between feeding yeast and feeding fish meal and in addition, there were no differences in acute stress response when feeding the animal with yeast.


   



    Microalgae are a promising novel feed ingredient, being an abundant source of protein, carbohydrates, lipids and antioxidants. Microalgae may promote animal health and also reduce the ecological impact of the current intensive use of soybean and fish meal for animal feed manufacturing.


   



    Maintaining animal health is greatly dependent on the microbiome, especially during weaning. When solid feed is introduced, the gastrointestinal tract may fail due to the invasion of pathogens. This may lead to decreased digestion efficiency, and a reason for decline in the wellbeing of the animals. Intake of prebiotics modulates the intestinal microbiota and changes composition of the microbiota. Prebiotics are indigestible, but they are available as an energy source to the bacteria inhabiting the lower gastrointestinal tract of the animals. Keeping healthy gut bacteria can optimise utilisation of nutrients from the sustainable ingredients.


   



    Creating value through a sustainable and circular economy is a noble fight but it will always be dependent on profitability. The health of livestock animals, however, must be a priority.


   



    The use of raw materials for the manufacture of compounded and blended animal feeds reflects their supply and relative cost to meet nutritional specifications.


   



    Trends in the use of raw materials in the production of animal feeds in Great Britain between 1976 and 2011 were studied using national statistics obtained through monthly surveys of animal feed mills and integrated poultry units to test the hypothesis that animal feed industries are capable potentially of adapting to future needs such as reducing their carbon footprints (CFP) or the use of potentially human edible raw materials.


   



    Although total usage of raw materials showed relatively little change, averaging 11.3 million tonnes (Mt) per annum over the 35-year period, there were substantial changes in the use of individual raw materials.


   



    There was a decrease in total cereal grain use from 5.7 Mt in 1976 to 3.5 Mt in 1989, with a subsequent increase to 5.4 Mt in 2011.


   



    The use of barley grain declined from 1.9 Mt in 1976 to 0.8 Mt in 2011, whilst the use of maize grain also decreased from 1.5 Mt in 1976 to 0.11 Mt in 2011.


   



    There were substantial increases in the use of wheat grain, from 2.1 Mt in 1976 to 4.4 Mt in 2011, and oilseed products, from 1.2 Mt in 1976 to 3.0 Mt in 2011.


   



    The use of animal and fish by-products decreased from 0.45 Mt in 1976 to 0.11 Mt in 2011 with most of the decrease following the prohibition of their use for ruminant feeds in 1988.


   



    There was relatively little change in the proportion of potentially human-edible (mainly cereal grains and soyabean meal) raw material use in animal feeds, which averaged 0.53 over the period.


   



    The trend in the total annual CFP of raw material use was similar to the trend in the total quantities of raw materials used over the period.


   



    Mean CFP per tonne (CFP t-1) was 0.57 t CO2e t-1 over the period (range 0.53 to 0.60). CFP t-1 remained relatively stable between 1995 and 2011, reflecting little change in the balance of raw material use.
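    The per-tonne figure is simply the total annual CFP divided by the total tonnage of raw materials used in that year, averaged over the years surveyed. A minimal sketch of that calculation (the annual totals below are illustrative placeholders, not figures from the survey):

```python
# Mean carbon footprint per tonne of raw material (t CO2e per t),
# computed from hypothetical annual totals of raw material use (Mt)
# and total CFP (Mt CO2e). All figures are illustrative only.
annual = {
    1995: {"raw_mt": 11.0, "cfp_mt_co2e": 6.27},
    2005: {"raw_mt": 11.5, "cfp_mt_co2e": 6.44},
    2011: {"raw_mt": 11.3, "cfp_mt_co2e": 6.55},
}

# CFP per tonne for each year, then the mean across years.
per_tonne = {y: d["cfp_mt_co2e"] / d["raw_mt"] for y, d in annual.items()}
mean_cfp = sum(per_tonne.values()) / len(per_tonne)

print({y: round(v, 2) for y, v in per_tonne.items()})
print(round(mean_cfp, 2))
```

    A stable per-tonne value across years, as reported above, indicates that the mix of raw materials (and hence their aggregate emissions intensity) changed little, even if total tonnage varied.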


   



    The decreased use of cereal grains from 1976 to 1989 suggests that animal feed industries can adapt to changes in crop production and also can respond to changes in the availability of co-product feeds.


   



    With a rising world human population, demand for human-edible feeds such as cereal grains will increase and will most likely make their use less attractive in diets for livestock.


   



    In the short-term specific economic incentives may be required to achieve significant reductions in human-edible feed use by livestock or in the CFP t-1 of animal feeds.


   



    Keywords: Raw materials, trends, human-edible, carbon footprint.


   



    Abbreviations


   



    CFP carbon footprint; CO2e carbon dioxide equivalent; CP crude protein; DEFRA Department for Environment, Food and Rural Affairs; DDGS distillers’ dried grains with solubles; GB Great Britain; GWP global warming potential; IPU integrated poultry unit; Mt million tonnes.


   



    Glossary


   



    Carbon footprint: Emissions of greenhouse gases (GHG), carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O), expressed as Global Warming Potential (GWP) in carbon dioxide equivalents (CO2e) on a 100-year time scale where CO2 = 1, CH4 = 23 and N2O = 300.
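    Using the GWP factors stated above (CO2 = 1, CH4 = 23, N2O = 300), aggregating a mixed emissions inventory into CO2e is a weighted sum. A minimal sketch, with an illustrative inventory not drawn from the text:

```python
# 100-year GWP factors as given in the glossary above.
GWP_100 = {"CO2": 1.0, "CH4": 23.0, "N2O": 300.0}

def co2_equivalent(emissions_t):
    """Return total emissions in t CO2e for a {gas: tonnes} inventory."""
    return sum(GWP_100[gas] * tonnes for gas, tonnes in emissions_t.items())

# Illustrative inventory: 10 t CO2, 1 t CH4, 0.1 t N2O
# -> 10*1 + 1*23 + 0.1*300 = 63.0 t CO2e
print(co2_equivalent({"CO2": 10.0, "CH4": 1.0, "N2O": 0.1}))
```

    Note that the non-CO2 gases dominate quickly: one tonne of methane counts as much as 23 tonnes of CO2 under these factors.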


   



    Raw materials: Crop products, co- products, animal and fish by-products, minerals and vitamins. Also known as “straights”.


   



    Compounds: Mixtures of raw materials which have been ground (normally hammer-milled) and pelleted by extrusion through a die- press.


   



    Blends: Mixtures of raw materials, not ground or pelleted. Grains and seeds are usually crushed but not ground.


   



    Introduction


   



    The composition of compounded and blended feeds manufactured by animal feed mills reflects the supply and relative cost of different raw materials.


   



    Worldwide, waste products from the manufacture of human foods and other products have been major sources of raw materials for animal feeds for many decades.


   



    For example, the importation into Europe of cereal grains and oilseeds in the latter part of the nineteenth century for the production of bread and soap led to the development of animal feed mills close to shipping ports as a way of dealing with waste products from the primary production processes and at the same time adding value to the basic raw materials (1).


   



    Although the main emphasis was on converting co-products which would otherwise be wasted into milk and meat, some potentially human-edible cereal grains, cereal co-products, pulses and oilseeds were also used to meet nutrient specifications, normally on a least-cost basis with specific constraints.


   



    Compounded (i.e. milled, mixed and pelleted) and blended (i.e. mixed but not milled or pelleted) animal feeds were formulated historically to be nutritionally-balanced complete feeds for monogastric livestock and, for ruminants, to be relatively high in crude protein (CP) to complement the relatively low CP concentration of pasture conserved as hay for winter feeding.


   



    The main objective has continued to be the application of established nutritional principles to meet the requirements of animals for essential nutrients and to increase livestock productivity.


   



    Animal feed industries have made major contributions in all countries to reducing waste and environmental pollution through the utilisation in diets for livestock of human-inedible co-products, mainly from the human food and drink industries.


   



    The environmental impact of livestock production includes emissions of the so-called “greenhouse” gases, principally carbon dioxide, methane and nitrous oxide, produced during the manufacture and use of inputs to the system (e.g. feed, fertiliser, housing, equipment).


   



    In addition, emissions of methane are produced from enteric digestion in animals and emissions of methane and nitrous oxide arise from their manure.


   



    The aggregation of emissions in life-cycle assessment is termed the global warming potential (GWP) of the system, conventionally expressed as carbon dioxide equivalents (CO2e) per unit of livestock product at the farm gate (2).


   



    The relative GWP of a range of typical European and North American crop production systems and of typical European livestock systems were studied by Wilkinson and Audsley (3).


   



    They found that an option to reduce the GWP of milk and meat production was to improve the efficiency of conversion of feed into animal product.


   



    However, they did not examine the GWP of different raw materials and the effect of changing raw material resource use on the GWP of concentrate feeds.


   



    Although the primary products carry most of the environmental burden, allocated according to relative economic value (4), the GWP of the co-product raw materials used in animal feeds is a significant component of the total GWP of livestock production, especially of pig and poultry systems.


   



    Concentrates are also a major economic cost of production in milk, pig meat and poultry systems, accounting for most of the variable costs of production (Table 1).


   



    The relatively low percentage of total GWP accounted for by concentrates in ruminant systems reflects the fact that grazed pasture and forage crops comprise the major components of the animal’s diet and that methane from the animal and its manure is a major contributor to total GWP (5).


   



    The relatively high unit cost of ruminant concentrates compared to grazed and conserved forages accounts for their important contribution to the variable costs of milk and beef production.


   



    The proportion of human-edible feed in typical diets for UK livestock ranges from 0.36 for milk production to 0.75 for poultry meat production (6).


   



    Whereas poultry are more efficient, ruminants can use land unsuitable for growing crops for direct human consumption.


   



    Despite large differences in overall feed conversion efficiencies between different livestock systems, the conversion of human-edible feeds into animal products is similar between ruminant and non- ruminant systems of production because of the relatively higher proportion of inedible feeds (grassland and other inedible raw materials) in ruminant diets than in diets for pigs and poultry (6).


   



    World populations of livestock, relative to 1961 have increased over the past 50 years 1.5-fold for ruminant livestock, 2.5-fold for pigs and 4.5-fold for chickens (7).


   



    The trend of an increased global population of non- ruminants is likely to result in greater pressure in future years on the use of human-edible feeds for animals and concern has been expressed already over the consumption by livestock of potentially human-edible raw materials, both in terms of environmental impact (8) and global food supply (9, 10).


   



    A major environmental concern worldwide is the production of soyabeans on land recently converted from rainforest.


   



    The effect of land use change in soyabean production, and of replacing imported oilseed meals such as soyabean meal with locally-sourced pulse grains such as field beans and peas, on the GWP of livestock production systems has been studied in pigs (11) and poultry (12).


   



    Major food security issues include the significant proportion of global arable land used for the production of animal feed rather than human food, which, together with structural changes in livestock systems (e.g. larger unit size, more monogastric livestock), is likely to put increased pressure on human food supplies in future years.


   



    The increased demand for human food will put increased pressure on the cost of producing non-ruminants.


   



    This will lead to a new market equilibrium in which higher meat prices lead to lower levels of demand. The balance will depend upon the income elasticity of demand for cereals and meat, which may be lower for cereals than for meat.


   



    In this paper the use of raw materials for the production of compounds and blends in Great Britain was analysed over the thirty-five year period from 1976 to 2011 with the objective of identifying trends in the composition of animal feeds and, using the example of national statistics from Great Britain, to test the hypothesis that animal feed industries are capable of change in response to future needs such as reducing human-edible feed use and environmental impact. Some implications for the future composition of animal feeds are also considered.


  Choosing the Ideal Shower Door For Your Bathroom
Posted by: tong12pp05 - Yesterday, 05:41 AM - Forum: Ark - No replies

Choosing the Ideal Shower Door For Your Bathroom

    Showers have taken a central role in today’s bathroom design, with some people even choosing to leave the tub out of their bathroom renovation in favor of a larger shower. There is no one-size-fits-all shower, and likewise bathroom shower doors come in a variety of styles and sizes to suit any design and budget. Glass shower doors and enclosures are popular choices, as they give an open, clean feeling to the bathroom, allow light to flow through the room, and help the space seem larger.


    Shower enclosures can be customized to suit any space, so use your imagination and work with your kitchen and bath design professional to find the right shower door to meet your needs. You can find plenty of inspiration in our bathroom design gallery, but here is an overview of some options:


    The framed enclosure tends to be viewed as a somewhat outdated choice, which is more difficult to clean and maintain due to the frame collecting dirt and grime. For some people, the old framed shower enclosure may be one of the reasons they are seeking to change their bathroom! While it may not top the list of popular fixtures, updated versions of this framed style, including double sliding shower doors, can still suit your new bathroom as a lower-cost option.


    A frameless enclosure with a hinged door is a more popular choice, as it brings a clean, modern edge to any style of design. Clear glass also allows you to show off tilework and other decorative features and leaves less space for dirt to get trapped. Choose a glass finish designed to repel soap scum and water spots to make sure your shower glass stays clean. The frameless enclosure could be either a full glass enclosure like the one pictured below left or a combination of tile walls framing a glass door like the one below right.


    Sliding glass doors are available as either double or single doors to accommodate any size of shower. Sliding doors make a sleek option for your bathroom and take up less space than a hinged door requires. The elegant and practical designs shown below use a single sliding shower door for a larger shower enclosure.


   



    Textured or frosted glass allows more privacy for your shower experience and gives the bathroom design a unique look. Frosted glass, like the bathroom remodel in Doylestown, PA pictured below, does a better job of concealing the shower user, but the textured option provides greater style variation and allows more light to shine through the shower.


   



    A hinged door for a bath/shower enclosure provides a stylish alternative to the traditional shower curtain for your tub/shower combination. Even in a small bathroom, a hinged glass door can be a sleek option to keep moisture in the tub while still opening up the space and allowing light to shine through.


   



    Go all or nothing with either no door or a fully enclosed shower. Where you have allotted space and budget for the ultimate shower experience, consider installing an open shower design with no door. This type of large, walk-in shower is usually partially enclosed with tile wall and glass but leaves the doorway open. It evokes the relaxing feel of a spa, particularly when combined with a rainfall showerhead or a massaging shower panel. On the other end of the spectrum you might consider a customized shower with a fully enclosed door to create your own home steam room, like the one pictured below.


   



    Customize your shower to achieve a unique, stylish shower enclosure. Let your imagination be your guide when selecting a design and materials, like using glass block instead of a standard glass enclosure or creating a unique shower doorway like the arched entry below.


   



    There are many possible options for shower door styles to fit your design. Consider your available space and budget, the style of your bathroom, and other factors like who will be using the bathroom when determining which shower door style is best for you. Your design expert will make sure these requirements are factored into your bathroom design to give you the best shower experience possible.


   



    Over the years, a small but significant number of homeowners have reported a strange, frightening, and potentially dangerous issue: glass shower doors that seemingly "explode" into small pieces spontaneously, often with no apparent provocation or stress. In many instances, it happens in the middle of the night, awakening the homeowners suddenly as a glass panel first bursts and then crashes to the floor and bathtub or shower pan.


   



    Contractors and glass door manufacturers initially reacted with understandable disbelief and skepticism: Glass does not explode all by itself. Surely, they argued, homeowners were reporting glass doors coming free from their frames or mounting hardware and crashing to the floor. The glass panels were not exploding spontaneously.


   





    But enough homeowners reported the same experience that gradually this phenomenon was acknowledged. An internet search for "exploding shower doors" produces dozens of results, including reports in major newspapers and trade magazines. Though extremely rare, there were even instances of residents experiencing the glass spontaneously exploding while they were showering. Certain characteristics were common to most of these experiences:


   



    The glass did not simply crack, it shattered explosively. The breakage was never a crack that progressed into pieces of glass tinkling to the floor. One minute the shower door was completely intact; the next minute it was fragmented into minute pieces, and the noise of the shattering was very loud—sometimes described as deafening.


    The explosion was spontaneous. This was not a case of shower panels falling out of frames and crashing to the floor, or of door brackets coming loose and causing the entire door to fall. Instead, the glass panels were shattering from the center outward all on their own, often with no one even in the room.


    It frequently happened at night, often very late or past midnight. Homeowners were in bed and sometimes were first woken up by an initial crack, followed by the explosion. Most episodes happened between midnight and 3:00 am.


   



    One homeowner's report is typical of what many people describe: "The middle of it blew clean apart leaving glass shards inside the frame. We were awakened by a very loud explosion upstairs. It was pretty scary. My daughter who sleeps upstairs said that she heard two noises. The first was like a big crack noise. Minutes or an hour later the thing exploded."


   



   



    The Industry Reaction


    Some retailers, when confronted by concerned and sometimes angry homeowners, have argued that the report of an "explosion" is exaggerated—that homeowners are probably hearing it this way because the small space and hard surfaces in a bathroom make any falling glass sound like an explosion. But it is hard to discount the people who witness such events and describe the explosion occurring first, followed by the falling glass. The retailers who sell the glass shower doors usually argue that improper installation is to blame.


   



   



    For their part, the installation contractors will point to the fact that the frames, hinges, and brackets often remain in place and undamaged after such mysterious glass explosions occur. In their view, the problem lies in the tempered glass itself.


   



   



    In other words, neither the door manufacturers nor the installation professionals acknowledge any responsibility for exploding glass doors.


   



    Several theories about the cause of exploding glass have been offered.


   



    Does the temperature change, from warmer to cooler, affect tempered glass? A Seattle Times article reports contractor Jerry Filgiano as saying that temperature extremes can affect tempered glass, though the slow lowering of temperatures from day to night likely does not count as "extreme."


    The same article says that nicked glass edges caused by a screw or bolt can cause the entire panel to shatter and that framed doors may be less apt to shatter than frameless doors.


    Mark Meshulam, a Chicago building consultant who has testified as an expert on the subject, says that although such instances appear spontaneous, there is always an underlying cause.


   



    Meshulam described tempered glass as being "like a tightly wound spring" that can reach a spontaneous breaking point for one of two reasons: an internal flaw, or damage to the glass.


   



    A tiny, almost invisible chip or crack can occur if a door is nicked by a misaligned screw or is bumped along the delicate outer edges. Such damage does not cause the door to break immediately, but may suddenly give way as temperature changes cause the glass to expand and contract, or even due to vibrations caused by noise.


    More rarely, doors can break due to nickel sulfide inclusion, a defect that occurs during the manufacturing process. When a piece of foreign material gets trapped inside the glass when it is manufactured, over time it can cause the glass to shatter for no obvious reason.


   



    It's important to note that actual injuries from exploding shower door glass are very rare. That's because the tempering process used to create safety glass causes it to break into very small pieces rather than large, sharp shards. But while this is the greatest strength of tempered glass, it is also a weakness. The heating process of tempering alters the tensile strength of the glass, and while this makes it much more resistant to direct impact, it also becomes more susceptible to edge impact. A piece of tempered glass may withstand a baseball crashing into its face, but it may shatter easily if struck with a mild blow on the edge.


  Ultraviolet Light Fights New Virus
Posté par : tong12pp05 - Hier, 05:40 AM - Forum : Ark - Pas de réponse

Ultraviolet Light Fights New Virus

    In the fight against the coronavirus disease 2019 (COVID-19) pandemic, an old weapon has re-emerged [1]. More than a century after Niels Finsen won the 1903 Nobel Prize for discovering that ultraviolet (UV) light could kill germs [2], UV light is surging in popularity as a method for disinfecting hospital rooms and other public spaces.


    Xenex is one of at least 30 companies making UV disinfection equipment. And not just for hospitals. Another company, Dimer UVC Innovations of Los Angeles, CA, USA, markets a cart with UV lamps, called GermFalcon (Fig. 2), that it claims can disinfect a whole airplane in 3 min [4]. UV light is also being used to disinfect and re-use hospital face masks [5].


    UV light is generally divided into three classes, based on the wavelength of the light. All of them are invisible to the human eye. The longest wavelengths are UVA (315–400 nm) and UVB (280–315 nm), which are found in ordinary sunlight. These are the rays that can cause sunburn if one stays outside too long without protection. UVA and UVB light rays have limited germ-killing ability because viruses and bacteria have had millions of years to adapt to them.


    But UVC light (200–280 nm) is completely absorbed by our atmosphere and never reaches the surface of the earth [6]. Therefore, UVC light is just as novel to SARS-CoV-2 as the virus is to humans. According to the International Ultraviolet Association, it is generally accepted that a dose of 40 mJ·cm⁻² of 254 nm light will kill at least 99.99% of “any pathogenic microorganism” [6], [7].
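    The dose figure above implies a simple exposure-time calculation, since delivered dose is irradiance multiplied by time. A minimal sketch in Python (the example irradiance in the comment is illustrative, not a value from the article):

```python
# Exposure time needed to reach the 99.99% inactivation dose cited above.
TARGET_DOSE_MJ_CM2 = 40.0  # 254 nm dose for >=99.99% inactivation (IUVA figure)

def exposure_time_s(irradiance_mw_cm2: float) -> float:
    """Seconds of exposure needed at a given irradiance (mW/cm^2).

    Dose (mJ/cm^2) = irradiance (mW/cm^2) x time (s).
    """
    return TARGET_DOSE_MJ_CM2 / irradiance_mw_cm2

# A surface receiving 0.2 mW/cm^2 would need 200 s (a little over 3 min).
```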

    At present there are many different designs for UV disinfection lamps. Some systems consist of just a bare lightbulb and a timer, while others are mobile robots that can reach hard-to-access places [8]. Two of the major design choices are the wavelength of light and the method of delivery. By far the most common wavelength for germicidal light is 254 nm, produced by low-pressure mercury lamps. These lamps are easy and cheap to manufacture because they use essentially the same technology as a fluorescent light bulb. A fluorescent bulb actually produces UV light inside the bulb. But the phosphor deposited on the glass surface of the bulb absorbs that light and re-emits it at longer wavelengths that humans can see. To make a UV lamp, the glass is replaced with a material transparent to UV light, such as fused quartz.


   



    However, 254 nm may not be the optimal wavelength for killing all viruses. Experts believe that different wavelengths disable viruses in different ways [9], [10]. The 254 nm light damages the viral deoxyribonucleic acid (DNA) or ribonucleic acid (RNA) so that the virus cannot reproduce. Shorter wavelengths, like 207–222 nm (sometimes called “far UVC”), are believed to damage the proteins on the surface of the virus that it needs to attach to human cells. Thus, the curve that describes the viral killing ability of UVC light has a double-humped shape, with a peak at shorter wavelengths and another around 265 nm.


   



    The Xenex system is designed to take advantage of both virus-killing methods, by producing light from a pulsed xenon source that spans the whole spectrum from 200 to 315?nm. Because xenon is an inert gas, xenon-stimulated bulbs can be disposed of more easily than ones containing toxic mercury. According to the company, more than 500 healthcare facilities around the world are currently using Xenex robots for whole-room disinfection.


   



    Disinfection with far-UVC lamps remains largely experimental but could have an intrinsic advantage. Initial evidence suggests that far-UVC light does not penetrate beyond the outer dead layer of skin cells or the liquid film on eyes in healthy people [10], [11]. Thus, it cannot cause skin cancer or cataracts, as conventional UVC lamps can. It also seems not to cause the temporary skin burns and eye damage (“welder’s flash”) of standard UVC. This presumably depends on the intensity of exposure; whether intense exposure to destroy pathogens on the hands, for example, would be safe is unknown.


   



    However, doctors may need some convincing to accept that some kinds of UV light may be safe to human eyes. “I would like to see more research on longer term exposure before I am convinced,” said Karl Linden, a professor of environmental engineering at the University of Colorado in Boulder, CO, USA. If it can be proven safe at the incidental exposure involved, far UVC light might prove ideal for disinfecting spaces that always have people in them, like a 24-hour market; they could perhaps also be used to provide constant disinfection in hospitals.


   



    No matter what wavelength is used, germicidal light has another problem to overcome: If a surface is in shadow, it will not be disinfected. Shadows abound in a typical hospital room, with multiple surfaces and objects that jut out at odd angles from the floor, walls, and ceiling. In one recently published study, when a standard UVC lamp was placed in the center of the room and operated according to the manufacturer’s instructions, some places like the wardrobe and the sink were partly or completely in shadow, and did not receive the full dose of 40 mJ·cm⁻² needed to assure 99.99% disinfection [12].
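    One reason lamp placement matters so much is geometry: for an idealized bare point source, irradiance falls with the inverse square of distance, so distant or shadowed surfaces receive far less than the target dose. A rough sketch (the point-source model and the lamp output are illustrative assumptions; real lamps are extended sources and rooms reflect some UV):

```python
import math

def irradiance_mw_cm2(uvc_output_w: float, distance_m: float) -> float:
    """Irradiance from an idealized isotropic point source (no reflections)."""
    sphere_area_cm2 = 4 * math.pi * (distance_m * 100.0) ** 2  # sphere in cm^2
    return (uvc_output_w * 1000.0) / sphere_area_cm2           # W -> mW

def dose_mj_cm2(uvc_output_w: float, distance_m: float, time_s: float) -> float:
    """Dose delivered over time_s seconds at the given distance."""
    return irradiance_mw_cm2(uvc_output_w, distance_m) * time_s

# Doubling the distance quarters the irradiance, so a surface twice as far
# away needs four times the exposure time for the same dose.
```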


   



    For this reason, many systems have to be moved to a few different places to thoroughly disinfect a room with UVC light. This means a housekeeper has to enter the room, position the lamps, leave the room, turn them on for 5 min or so, then re-enter the room, reposition the device, and so on. It is a laborious process. To address this shortcoming, UVD Robots, a company based in Odense, Denmark, has developed a UVC disinfection robot that moves around the room autonomously, eliminating the need for manual repositioning. According to a UVD Robots spokesperson, the company recently sold “a three-digit number” of their robots to Sunay Healthcare Supply (also in Odense) for use in China, and these robots are now available to 2000 Chinese hospitals [8]. The company says its robots are being used in more than 50 countries on all six inhabited continents.


   



    If UV disinfection is so good, why has it taken so long to be embraced by hospitals, and why is it virtually unknown to other businesses (outside of wastewater treatment, where it has been used for decades)? It has a lot to do with human perceptions, said Edward Nardell, a professor of environmental health, immunology, and infectious diseases at the Harvard T.H. Chan School of Public Health in Boston, MA, USA. “The first barrier is fear. Everyone has heard doctors say that we should not be exposed to too much UV. That UVC penetrates skin and eyes poorly is too nuanced a difference. Lack of familiarity is a second reason. Engineers and architects do not hear about germicidal light in their training. It is an orphan discipline.”


   



    UV light may also suffer from a quirk of history [1]. In the 1940s and 1950s, antibiotics came into wide use, giving many doctors the impression that the war against microbes was won. UV light, therefore, was not only an orphan technology but also seemed obsolete. However, that complacency began to unravel in the 1980s, when drug-resistant bacteria emerged, particularly tuberculosis (TB). Nardell said that a partial solution to disrupt hospital transmission of TB, an airborne pathogen, used louvered UVC lamps to disinfect the air near the ceiling, which was then circulated to the rest of the room. But that strategy did not affect pathogens that depend on surface-based transmission. Hospital-acquired infections remain a major problem globally, affecting an estimated seven to ten of every 100 hospitalized patients [13]. Many of the pathogens that cause these infections are multi-drug resistant and difficult or impossible to cure with drugs, so it makes sense to try to kill them before they can enter the body. Thus, before 2020, hospitals were the main customers for whole-room UV disinfection.


   



    But now COVID-19 has arrived and changed everything. “With the new coronavirus, the demand outside hospitals has soared,” said Stibich. “We’ve deployed at hotels, offices, anywhere there is a high perceived risk, or they want extra assurance. As countries re-open, these other areas are going to.”


   



    Ultraviolet light is a type of electromagnetic radiation that makes black-light posters glow, and is responsible for summer tans — and sunburns. However, too much exposure to UV radiation is damaging to living tissue.


   



    Electromagnetic radiation comes from the sun and is transmitted in waves or particles at different wavelengths and frequencies. This broad range of wavelengths is known as the electromagnetic (EM) spectrum. The spectrum is generally divided into seven regions in order of decreasing wavelength and increasing energy and frequency. The common designations are radio waves, microwaves, infrared (IR), visible light, ultraviolet (UV), X-rays and gamma-rays.


   



    Ultraviolet (UV) light falls in the range of the EM spectrum between visible light and X-rays. It has frequencies of about 8 × 10¹⁴ to 3 × 10¹⁶ cycles per second, or hertz (Hz), and wavelengths of about 380 nanometers (1.5 × 10⁻⁵ inches) to about 10 nm (4 × 10⁻⁷ inches). According to the U.S. Navy's "Ultraviolet Radiation Guide," UV is generally divided into three sub-bands:
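    The quoted frequency range follows directly from the wavelengths via c = λν. A quick sanity check in Python:

```python
# Convert the UV wavelength bounds quoted above into frequencies.
C = 2.998e8  # speed of light, m/s

def frequency_hz(wavelength_nm: float) -> float:
    """Frequency (Hz) for a wavelength given in nanometres."""
    return C / (wavelength_nm * 1e-9)

# 380 nm -> ~7.9e14 Hz and 10 nm -> ~3.0e16 Hz, matching the
# ~8e14 to 3e16 Hz range quoted above.
```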


   



    UV radiation has enough energy to break chemical bonds. Due to their higher energies, UV photons can cause ionization, a process in which electrons break away from atoms. The resulting vacancy affects the chemical properties of the atoms and causes them to form or break chemical bonds that they otherwise would not. This can be useful for chemical processing, or it can be damaging to materials and living tissues. This damage can be beneficial, for instance, in disinfecting surfaces, but it can also be harmful, particularly to skin and eyes, which are most adversely affected by higher-energy UVB and UVC radiation.


   



    UV effects


    Most of the natural UV light people encounter comes from the sun. However, only about 10 percent of sunlight is UV, and only about one-third of this penetrates the atmosphere to reach the ground, according to the National Toxicology Program (NTP). Of the solar UV energy that reaches the equator, 95 percent is UVA and 5 percent is UVB. No measurable UVC from solar radiation reaches the Earth's surface, because ozone, molecular oxygen and water vapor in the upper atmosphere completely absorb the shortest UV wavelengths. Still, "broad-spectrum ultraviolet radiation [UVA and UVB] is the strongest and most damaging to living things," according to the NTP's "13th Report on Carcinogens."


   



    Sunburn


    A suntan is a reaction to exposure to harmful UVB rays. Essentially, a suntan results from the body's natural defense mechanism kicking in. This consists of a pigment called melanin, which is produced by cells in the skin called melanocytes. Melanin absorbs UV light and dissipates it as heat. When the body senses sun damage, it sends melanin into surrounding cells and tries to protect them from sustaining more damage. The pigment causes the skin to darken.


   



    "Melanin is a natural sunscreen," Gary Chuang, an assistant professor of dermatology at Tufts University School of Medicine, told Live Science in a 2013 interview. However, continued exposure to UV radiation can overwhelm the body's defenses. When this happens, a toxic reaction occurs, resulting in sunburn. UV rays can damage the DNA in the body's cells. The body senses this destruction and floods the area with blood to help with the healing process. Painful inflammation occurs as well. Usually within half a day of overindulging in the sun, the characteristic red-lobster look of a sunburn begins to make itself known, and felt.


   



    Sometimes the cells with DNA mutated by the sun's rays turn into problem cells that don't die but keep proliferating as cancers. "The UV light causes random damages in the DNA and DNA repair process such that cells acquire the ability to avoid dying," said Chuang.


   



    The result is skin cancer, the most common form of cancer in the United States. People who get sunburned repeatedly are at much higher risk. The risk for the deadliest form of skin cancer, called melanoma, doubles for someone who has received five or more sunburns, according to the Skin Cancer Foundation.


   



    Other UV sources


    A number of artificial sources have been devised for producing UV radiation. According to the Health Physics Society, "Artificial sources include tanning booths, black lights, curing lamps, germicidal lamps, mercury vapor lamps, halogen lights, high-intensity discharge lamps, fluorescent and incandescent sources, and some types of lasers."


  Quality of Surgical Instruments
Posté par : tong12pp05 - Hier, 05:39 AM - Forum : Ark - Pas de réponse

Quality of Surgical Instruments

    Many surgeons will have encountered the scissors that would not cut, and the artery clip that comes off in a deep difficult location, but it would be reasonable to assume that new instruments should be of assured quality. This study reports the surprising findings of a local quality control exercise for new instruments supplied to a single trust.


    MATERIALS AND METHODS

    Between January 2004 and June 2004, all batches of new surgical instruments ordered by the Central Sterile Supplies Department of St Bartholomew's and the Royal London Hospitals were assessed by three clinical engineers, with reference to British Standards (BS) requirements.

    RESULTS


    Of 4800 instruments examined, 15% had potential problems. These included 116 with machining burrs and debris in the teeth of the tissue-holding regions, 71 defects of ratcheted instruments, 34 scissors with deficient cutting action, and 35 tissue forceps with protruding guide pins. In addition, 254 instruments did not have a visible manufacturer's mark.


    CONCLUSIONS


    This study demonstrates the value of local quality control for surgical instruments. This is of importance in an increasingly hazard-conscious environment, where there are concerns over instrument sterilisation, surgical glove puncture and the potential for transmission of blood-borne and prion diseases.


    A surgeon performing a surgical procedure should be able to assume that the instruments used are safe and reliable – particularly if they are new. To ensure the quality of these instruments, the Health Care Standards Policy Committee directed the British Standards Institution to produce requirements for the materials, design, dimensions and other features of surgical instruments. As a result, British Standards (BS), incorporating International Organization for Standardization (ISO) standards, were published.1 Each year, large numbers of new instruments are ordered by healthcare facilities across the UK, and those ordering them should be able to rely on these standards. This study reports the results of local quality control by the clinical engineering department of all new instruments supplied to a single NHS trust.



    Over a 6-month period between January 2004 and June 2004, all batches of new surgical instruments delivered to the Barts and the London NHS Trust, from a variety of manufacturers, were assessed by three clinical engineers. The suppliers and manufacturers were informed beforehand. Where large numbers of identical instruments were delivered in a single batch, samples were examined as follows: 25–49 instruments, 50%; 50–74 instruments, 30%; 75–99 instruments, 20%; and 100 or more instruments, 15%. In total, 4800 instruments were inspected, where necessary under magnification, for flaws as defined under BS quality assurance requirements.
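    The batch-sampling rule above can be sketched as a small function. The percentage bands come from the text; inspecting batches of fewer than 25 in full and rounding fractional counts up are assumptions, since the paper does not state either:

```python
import math

def instruments_to_inspect(batch_size: int) -> int:
    """Sample size for a delivered batch under the stated inspection rule."""
    if batch_size < 25:
        fraction = 1.00  # assumption: small batches inspected in full
    elif batch_size < 50:
        fraction = 0.50
    elif batch_size < 75:
        fraction = 0.30
    elif batch_size < 100:
        fraction = 0.20
    else:
        fraction = 0.15
    return math.ceil(batch_size * fraction)

# e.g. a batch of 60 identical clips -> 18 inspected; a batch of 200 -> 30.
```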


   



    In total, 730 (15%) instruments failed the inspection. Table 1 shows the flaws that were identified. Figure 1 shows three views of the jaws of vascular clamps: a well-finished instrument on the left, and instruments with machining burrs in the teeth in the middle and right views. Figure 2 shows a crack in the securing screw of scissors on the left, a crack through the end of the jaws of a needle holder in the middle view, and a major soldering fault in the surface of a wire-bending forceps on the right. Figure 3 demonstrates protrusion of a sharp guide pin on gentle closure of tissue forceps.


   



    The commonest fault identified was lack of a maker's mark. BS states that ‘the instrument shall be marked with the name or registered trade mark of the manufacturer or supplier’.1 This may seem like a minor infringement, but in fact it is highly important. If an instrument fails in service, it is essential that the supplier and manufacturer can be notified, so that any potential problem can be rectified, to ensure the safety of the patient and theatre staff. In addition, there is the question of liability and insurance.


   



    The commonest mechanical and structural fault was machining burr debris. BS states that ‘all surfaces must be free from pores, crevices and grinding marks’.1 The fine metallic surfaces of surgical instruments are the product of a number of engineering processes. The shapes and details are initially created by casting and pressing the metal into the required shape, but then finer detail is ground in. In modern, computer-controlled, laser-guided engineering, this should be a straightforward and reliable process, producing an extremely accurate surface, as is shown in the left view of Figure 1. Sometimes, older methods are used, but a fine finish should still be possible, as long as the surface is inspected and machine brush-polished. If this process is incomplete, metallic debris and surface imperfections will remain, as shown in the middle view of Figure 1. This may be a problem in a number of ways. First, blood and tissue debris may collect in the imperfect surface. We have traditionally relied on sterilisation procedures to render such debris inert, but there are now concerns that prion disease may survive such processes.2 The metallic fragments may also wear off these surfaces, and remain as microscopic debris in the wound. Sharp burrs on instrument handles may contribute to previously unexplained surgical glove punctures. Although we cannot reference any reported instance of this, BS states that ‘there shall be no sharp edges other than those required by the pattern of the instrument’.1


   



    Cracks and soldering faults may also provide niches for retention of blood and tissue, and serious defects may lead to instrument failure, such as the examples shown in Figure 2.


   



    Every surgeon must have encountered the scissors that do not cut but, surprisingly, this can be a problem with new scissors. In this study, 34 scissors of various types did not meet the simple BS requirement, which describes how wet tissue paper (for fine dissecting scissors) and no. 18 gauze (for heavy tissue and suture scissors) must be cut cleanly and without tearing, for two-thirds of the length of the cutting blades.1


   



    Most surgeons will also know the problem of the artery clip which comes off in a deep, difficult location. Contrary to popular myth, most surgeons do criticise their own technique for such problems, but may now be surprised to find that we identified 71 ratchet problems in new artery clips. BS describes in detail how the racks of these instruments should function so that they ‘mate accurately when engaged’,1 and should not spring open when left closed for 3 h on a test wire of specified diameter. The instruments that failed our assessments all had racks that did not engage correctly, and sprang open when tested in this way.


   



    Many fine tissue forceps have guide pins to reinforce the accuracy of the jaws mating. BS states that ‘if present, the guide pin shall be tapered to facilitate entry into the locating hole and shall not protrude from the hole when the jaws are closed’.1 We identified 35 guide pins which protruded on light, but complete, closure of the forceps’ jaws on naked eye inspection. This may be a source of glove puncture.


   



    The stainless steel alloy from which modern instruments are made must also conform to BS. Procedures are described to test corrosion resistance, but it surprised us to find visible corrosion on new instruments.


   



    This study demonstrates the value of local quality control for new surgical instruments. We have found a significant number of cases where new instruments did not appear to meet appropriate standards, and have discussed the potential problems that may result. It must be stressed that we have not shown any specific instance of harm to a patient or staff through these defects. Suppliers were informed, and remedial action taken. All defective instruments were replaced and re-examined prior to entering service.


    Surgeons rely heavily on the quality of surgical instruments to perform procedures to the highest standard. When instruments are purchased from reputable suppliers, it is presumed that they are of high quality, and they are usually put to use before any final user quality control is carried out. Problems arise, however, when these instruments do not live up to these expectations. Poor-quality instruments fail and break, and when this happens during an operation the consequences for the patient can be disastrous. In the United States, the Food and Drug Administration (FDA) published an alert in 2008 stating that nearly 1000 incidents of retained pieces of broken instruments (unretrieved device fragments, UDFs) occurred each year, leading to a range of problems including local tissue reactions, infections, disability, and even death [1]. The alert also notes that with the increasing use of magnetic imaging modalities, the unrecognized presence of ferrous foreign bodies may cause tissue trauma or thermal injury.


   



    In the United Kingdom, Daly et al. reported on a pilot study of a Surgical Instruments Service to assess the quality of instruments purchased by the hospital, remove those unfit for purpose, and inform the manufacturer [2]. Brophy et al. found that 15% of surgical instruments examined by medical and mechanical engineers failed to meet the appropriate British Standards (BS) guidelines [3]. Instruments purchased from certain manufacturers failed at a rate of 35%, suggesting that potentially one in every three instruments in some sets are of substandard quality [4].


   



    In this study, we aim to define and examine the problem of UDFs by observing the contribution of poor quality surgical instruments to reports of patient safety incidents made to the National Reporting and Learning System (NRLS) of the National Health Service (NHS) in England and Wales. In doing so, evidence for the requirement of instrument quality control can be determined.