Thursday, October 31, 2019

Haircare Limited Case Study Example

"They include both present and future expected losses." (Cornell Law School) Torts may be classified as specific torts such as trespass, assault, battery, negligence, products liability and intentional infliction of emotional distress. In the instant case study of Hair Care Limited, this paper analyses the legal position with respect to the claim for damages by Joan, the claimant, against the hair colourant manufacturer Hair Care Limited and against Barber, the proprietor of a hairdressing salon, for the personal injury she suffered from using the colourant manufactured by Hair Care Limited. The paper also discusses the liability of the hairdresser Barber to compensate Joan for her injury and suffering.

The instant case arises from the negligence of Barber, the hairdresser, in failing to advise the beautician or the customer of the potential danger of using the colourant and of the requirement that a patch test be conducted before use. Although he had knowledge of the potential injury, he did not care to advise the people concerned. Hence it is a case to be claimed by the plaintiff Joan under tort arising out of negligence. The following is a discussion of some of the issues concerned with negligence and tort.

Negligence is the most important tort in terms of cases and money. Negligence may be defined as a breach of a legal duty to take care, resulting in damage, undesired by the defendant, to the plaintiff. Tortious liability arises from the breach of a duty primarily fixed by law; that duty is a duty to take care. In order to establish negligence there:

Must be a duty of care
Breach of that duty
Result in damage

Breach of a Duty: "In many cases no duty of care has to be investigated because they are obvious. The question is therefore: is there a breach of that duty. It is obviously a question of standards. The idea of reasonableness is linked with how people react." ("Jus" d'orange) Blyth v Birmingham Waterworks 1856: negligence is the omission to do something which the reasonable man would do, or doing something which the reasonable man would not do. Other factors in assessing reasonable care are:

Probability of the thing occurring
Seriousness of the event at risk if it did happen
Practicality of precautions
Consideration of the social value of the defendant's activities

Product Liability and Manufacturing Risks: A manufacturing defect will attract liability. In Grant v. Australian Knitting Mills (1935) the defendant manufactured underwear out of wool. One of the chemicals used to clean the wool is a sulphur compound. Through an error, the garment bought by the plaintiff had not been washed free of the chemical. He was sensitive to it and washed the garments, but not sufficiently, which resulted in a severe dermatological reaction that almost killed him. Proceedings were brought against both the seller and the manufacturer. The manufacturer's defence was that this was the first complaint they had ever received and hence they should not be blamed; on the contrary, they had been really careful. The Privy Council said: one explanation is that an employee was negligent, then it

Tuesday, October 29, 2019

HADRON COLLIDER Essay Example

The LHC is used by physicists to recreate the conditions immediately after the Big Bang. This is done by colliding two beams at very high energy. Groups of physicists from all over the world then examine the particles that are formed as a result of the collisions, using dedicated detectors in several experiments. There are several theories that describe the outcomes of these collisions. For long, the Standard Model of particle physics has served scientists well as a way of comprehending the basic laws of Nature; however, it tells only part of the story. Only experimental data at the high energies reached by the LHC can push that knowledge forward, challenging both those who seek proof of established knowledge and those who tend to dream past the paradigm (Rossi 50).

Rumours from an internal memo have it that one of the detectors at CERN's Large Hadron Collider has picked up signals that could actually be the particle that has been sought after for a long period of time, known as the Higgs boson. One of the major scientific objectives of the massive six-billion-euro atom smasher was to demonstrate or confirm the presence of the Higgs boson, a hypothetical particle that is believed to give everything in the universe its mass (Steiger & Uhl 73). The particle is a critical part of the Standard Model used in physics to describe the way in which atoms and particles are constituted. "Unconfirmed reports that physicists working on the LHC had detected the Higgs boson started circulating after an internal memo was published on the internet. Scientists quickly urged caution over the assertions, as most particle candidates that appear in the collision tests at the LHC are

Sunday, October 27, 2019

Telecommunication in the 21st Century

Telecommunications in the 21st century have improved over the past decade through the introduction of better techniques by which signals can be transmitted from a transmitter through a medium to a receiver. These techniques have improved mobile communications and satellite transmission and have helped to improve data security. Some of these techniques are amplitude modulation (AM), frequency modulation (FM), sampling and link analysis (SLA), and PCM.

The acronym PCM stands for 'pulse-code modulation', which is used for digitizing analogue data, for instance audio signals. This is carried out by sampling the analogue signal at uniform intervals and then quantizing each sample to a symbol in a digital code (e.g. 10001). It is, technically, a way in which analogue signals are converted to digital form. The PCM technique has its advantages: it makes processing of signals cheap, since PCM is digital, and it helps to filter off frequencies above the highest signal frequency. Pulse-code modulation has been used for some compact disc formats, for digital video, and for digital audio in computers.

In PCM, there is a series of processes to be followed:

Filtering
Sampling
Quantizing
Binary coding
Companding

Filtering: This is the process by which frequencies above the highest signal frequency are removed. If these frequencies are not removed, problems (aliasing) occur at the next stage, sampling.

Sampling: This stage of PCM is performed through PAM (pulse amplitude modulation). It answers the question of how signals change from one form to another (analogue to digital). The original analogue signal is used for the amplitude modulation of a pulse train of constant amplitude and frequency; this constant frequency is known as the sampling frequency (i.e. the number of samples per second). The sampling frequency has to be more than twice the maximum frequency of the analogue signal. To work out the sampling rate, the Nyquist theorem is used: "in order to be able to reconstruct the original analogue signal, a minimum number of samples has to be taken". It can be stated as:

Fs > 2(BW)

Fs = sampling frequency
BW = bandwidth of the original analogue voice signal

Quantizing and Coding: This means converting each analogue sample into a discrete value (in the form of a binary code) that can be given a digital code word. It is done by assigning each sample to a quantization interval. The instantaneous amplitude is rounded off to one of a fixed set of levels, which introduces some uncertainty (quantization noise). The number of levels is given by:

Number of levels = 2 ^ Bn (Bn is the number of bits used in the encoding)

It was shown in the experiment that the higher the number of quantization levels, the smaller the amount of quantizing noise. However, increasing the number of quantization levels to lower the quantizing noise introduces complexity into the system, as the PCM system has to handle more code words.

Companding: The word is derived from the combination of compressing and expanding. This is another stage in pulse-code modulation. It is the process of compressing a given analogue signal and then expanding the signal back to its original size at the destination. In this process, the input signal is compressed into logarithmic segments and then quantized and coded. The larger the signal, the more it is compressed.
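The sampling and quantizing steps above can be made concrete with a short sketch. The following Python snippet is a minimal illustration only, not part of the original lab work: the 1 kHz test tone, the 8 kHz sampling rate and the 4-bit word length are assumed values, chosen simply to satisfy Fs > 2(BW) and to mirror the 4-bit scheme discussed later.

```python
import numpy as np

# Minimal sketch of the sampling and uniform quantization steps described above.
# The test tone, sampling rate and bit depth are assumed illustrative values,
# not data from the experiment.

fs = 8000          # sampling frequency Fs (Hz)
f_signal = 1000    # analogue test tone (Hz), so Fs > 2 x signal frequency
bits = 4           # Bn: bits per code word  ->  2**Bn quantization levels
levels = 2 ** bits # Number of levels = 2 ^ Bn = 16

t = np.arange(0, 0.002, 1 / fs)        # 2 ms worth of sample instants
x = np.sin(2 * np.pi * f_signal * t)   # the "analogue" message, range -1..+1

# Uniform (linear) quantization: map -1..+1 onto integer levels 0..levels-1
step = 2.0 / levels
codes = np.clip(np.floor((x + 1.0) / step), 0, levels - 1).astype(int)

# Each sample becomes a fixed-length binary code word, as in the 4-bit scheme
words = [format(c, f'0{bits}b') for c in codes]
print(words[:8])

# Quantization noise: the error introduced by rounding to discrete levels
reconstructed = (codes + 0.5) * step - 1.0
print("max quantization error:", np.max(np.abs(x - reconstructed)))
```

Adding one bit doubles the number of levels and halves the quantization step, which is the sense in which more quantization levels mean less quantizing noise at the cost of longer code words.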
Since the larger signals are compressed more than the smaller signals, the quantization noise grows with signal level, which indirectly keeps the SNR (signal-to-noise ratio) roughly constant.

EXPERIMENTATION AND OBSERVATION

Apparatus:
Oscilloscope
PCM ENCODER module
Connection cables

The experiment was carried out by sending an input (analogue message) into the PCM ENCODER module. This input is constrained to a defined bandwidth and amplitude range in order to make sure the Nyquist criterion is observed. The PCM ENCODER module looks like the diagram below. A suitable encoding scheme for the analogue sample is selected, for example a 4-bit or 7-bit encoding scheme. The analogue signal is fed in through Vin. For this experiment, the clock is the 8.33 kHz TTL signal from the MASTER SIGNAL module. The time frame is also very important, as each binary word is located in a time frame. A frame is 8 clock periods long and has 8 slots of equal length (i.e. 0-7). The LSBs (a pattern of 1s and 0s) are embedded in the encoder itself; these are useful in determining the location of each frame in the data stream.

Initially the 4-bit linear coding scheme is selected and patched up with the 8.33 kHz TTL sample clock. CH-2A displays the clock signal on the oscilloscope. The display below shows the 4-bit PCM output for a zero-amplitude input.

Quantization in PCM encoding is the next stage after sampling. The quantization level is transmitted instead of the sample value. The quantization levels are binary coded (i.e. binary '1' in the presence of a pulse and binary '0' in the absence of a pulse).

RESULTS AND OBSERVATION

The output of the variable DC source is connected to Vin, and sweeping the DC voltage slowly forward and backward shows discrete jumps in the data pattern. The maximum voltage is recorded as -2.51 V. Increasing the amplitude of the DC input signal gives the display shown below. Changing the DC voltage from maximum to minimum gave the range of binary code variations listed below. The following measurements were then made after recording the quantizing levels and associated binary numbers:

Sampling rate - 16.6 kHz
Frame width - 950 µs
Width of a data bit - 120 µs
Width of a data word - 480 µs
Number of quantizing levels - 16

From the measurements above it can be concluded that the quantizing levels are linearly spaced. The same process is applicable to 7-bit linear encoding, using the toggle switch on the front panel, though it takes longer than the 4-bit linear encoding done earlier.

The companding stage in PCM is the process by which an analogue signal is compressed at the source and then expanded back to its original size when it gets to its destination. During this process, the signal is compressed into segments which are quantized using uniform quantization. As the sample signal increases, the compression increases (i.e. the larger samples are compressed more than the smaller samples). The standard of companding used in this experiment is the A-law. The characteristic is:

F(x) = A|x| / (1 + ln A), for |x| < 1/A
F(x) = (1 + ln(A|x|)) / (1 + ln A), for 1/A <= |x| <= 1

where A = 87.7 in Europe and x is the normalized sample to be compressed.

RESULTS AND OBSERVATION

The toggle switch is changed to 4-bit companding and the TIMS A4 companding law, pre-selected on the switch board, is selected. This gave the measurements below.
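As a rough illustration of the A-law characteristic given above, the following Python sketch implements the compression and expansion equations. It is not part of the TIMS experiment: the test samples are assumed, and A = 87.7 is kept as quoted in the text (the ITU-T G.711 standard value is 87.6).

```python
import numpy as np

# Sketch of the A-law compression/expansion characteristic described above.
# A = 87.7 is the value quoted in the text; the input samples are assumed
# illustrative values, normalized to the range -1..+1.

A = 87.7

def a_law_compress(x):
    """Compress a normalized sample x (-1..+1) with the A-law characteristic."""
    ax = np.abs(x)
    small = ax < 1.0 / A
    y = np.where(small,
                 A * ax / (1.0 + np.log(A)),
                 (1.0 + np.log(A * ax)) / (1.0 + np.log(A)))
    return np.sign(x) * y

def a_law_expand(y):
    """Inverse (expanding) operation applied at the receiver."""
    ay = np.abs(y)
    small = ay < 1.0 / (1.0 + np.log(A))
    x = np.where(small,
                 ay * (1.0 + np.log(A)) / A,
                 np.exp(ay * (1.0 + np.log(A)) - 1.0) / A)
    return np.sign(y) * x

samples = np.array([-0.9, -0.1, -0.01, 0.01, 0.1, 0.9])
compressed = a_law_compress(samples)
print(compressed)                          # small samples boosted, large ones compressed
print(a_law_expand(compressed) - samples)  # round-trip error is ~0
```

Small samples are boosted and large samples compressed before uniform quantization, which is how companding keeps the signal-to-noise ratio roughly constant across signal levels.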
In PCM decoding, the TIMS PCM DECODER module is used. This is the first operation in the receiver towards regenerating the received pulses; the amplitude of the pulse generated is the linear sum of all the pulses in the coded word. In order to recover the information at the PCM decoder, knowledge of the sampling rate used to encode the signal is essential.

RESULTS AND OBSERVATION

The setup is similar to the earlier setup, with CH-1A connected via the scope selector to the PCM output of the PCM ENCODER. A large negative DC voltage is used for the message; the alternating '0' and '1' bits produced are measured to be 1920 µs apart. The 4-bit linear decoding scheme is now selected to carry out the decoding process. The 8.33 kHz TTL signal is stolen from the transmitter and connected to the clock input.

Time division multiplexing (TDM) is an alternative to multiplexing by frequency sharing. Each channel is allocated specific time slots, and each slot carries frames which must be repeated at the sampling rate. It can only be used for pulsed signals, not for analogue signals, because analogue signals are continuous in time. The importance of TDM is that it enables many independent signals to be transmitted over the same channel.

RESULTS AND OBSERVATION

A PCM TDM signal can be generated using two PCM ENCODER modules, each driven by the same clock (one the MASTER and the other the SLAVE). Interconnecting them in this way interleaves their frames and allows the two outputs to be added together to form the TDM signal. The display on the oscilloscope is shown below. The connection of the MASTER and the SLAVE generates the diagram below. Patching up the two PCM data outputs generates the display below. The next step, shown below, is to confirm that the frame synchronization bit is a '1' for the MASTER and a '0' for the SLAVE.

The last stage of this experiment is to separate the two messages that were multiplexed earlier. The PCM demodulator is patched up, with each module receiving the same clock stolen from the transmitter, and each module also receives an external FS signal. The diagram below confirms that the two messages have been recovered and appear at the correct outputs.

CONCLUSION

Pulse-code modulation is a very effective way of conveying an audio signal by sampling the signal and transmitting binary coded pulses representing the sample values. It has emerged as the most favoured modulation scheme for transmitting analogue information such as voice and video signals. The advantages of PCM over other forms of modulation (e.g. analogue modulation) are:

PCM suppresses wideband noise.
It allows regeneration of the coded signal along the transmission path.
It enables digital multiplexing.
It enables the efficient exchange of increased channel bandwidth for improved signal-to-noise ratio.

All these advantages, however, come at the expense of increased system complexity and increased channel bandwidth.

REFERENCES

http://www.webopedia.com/TERM/P/PCM.html [last accessed 25/03/08]
http://cbdd.wsu.edu/kewlcontent/cdoutput/TR502/page13.htm [last accessed 25/03/08]
http://www.cisco.com/warp/public/788/signalling/waveform_coding.pdf [last accessed 25/03/08]
http://www.comlab.hut.fi/opetus/245/2004/09_PCM.ppt#20 [last accessed 25/03/08]
Rodger E. Ziemer and William H. Tranter, "Principles of Communications", Chapter 3, John Wiley and Sons, NY, 2002.
Simon Haykin, "Communication Systems", Chapter 3, John Wiley and Sons, NY, 2001.
David Petersen, "Audio, Video and Data Telecommunications", Chapter 2, McGraw-Hill, Cambridge, 1992.

Friday, October 25, 2019

Freedom of Speech in Cyberspace

Freedom of Speech in Cyberspace: Government Restrictions on Content in the United States of America (this essay is missing the url of each source cited)

"Information wants to be free. The Internet fosters free speech on a global scale."[1]

Introduction

The Internet provides a gateway for an individual to speak freely and anonymously without being targeted for what he or she says. With this said, one of the biggest issues concerning the Internet today is freedom of speech. The issue of free speech on the Internet has been a topic of discussion around the world in the past years. It is a unique communication medium and is more powerful than the traditional media.[2] Because the Internet cannot be compared equally to other mediums of communication, it deserves the utmost freedom-of-speech protection from the government. Restricting speech on the Internet takes away individuals' rights and keeps them from experiencing the Internet's benefits and uses. Information found on the Internet is endless and boundless, and this poses the question: should the government be allowed to regulate the information and content being transmitted or posted online?

Free Speech on the Internet

Free speech on the Internet is a very controversial subject and has been the key problem surrounding the Internet today. Government officials continue to attempt to regulate and govern the Internet. The subject has been intensified by the terrorist attacks against the United States and around the world within the past years. The government believes that by regulating the Internet it will protect the general public from criminal actions and eliminate the exposure of children to pornography or vulgar language. Senator Jim Exon of ...

...eved 27 April 2004.
Rosenthal, Elisabeth. "China Lists Controls To Restrict the Use Of E-mail and Web." The New York Times, 27 January 2000. Retrieved 26 April 2004.
Smith, Craig. "Tough New Rules Don't Faze Chinese Internet Start-Ups." The New York Times, 4 October 2000. Retrieved 26 April 2004.
Stewart, William. Key Internet Features - Freedom of Speech. Living Internet. Retrieved 20 April 2004.
Unknown Author. An Overview of the Communications Decency Act (CDA). Center for Democracy & Technology. Retrieved 26 April 2004.
Unknown Author. Communications Decency Act. Electronic Privacy Information Center. Retrieved 26 April 2004.
Unknown Author. Communications Decency Act. Wikipedia. Retrieved 25 April 2004.
Unknown Author. Introduction to Ethics. Engineering Handbook. Retrieved 20 April 2004.

Thursday, October 24, 2019

Popularity of Juvenile Boot Camps

Juvenile boot camps are a rather new way of reintegrating delinquent youth into the community and making them a useful and constructive part of it. This paper will discuss the effectiveness and popularity of juvenile boot camps.

Effectiveness and Popularity of Juvenile Boot Camps: Juvenile boot camps were introduced and proliferated in the United States in the 1990s. The primary objective of creating these facilities was to develop an effective program that would help rehabilitate the juvenile delinquent population. Though the number of juvenile boot camps has increased rapidly during these years, and nearly all the states have adopted some kind of juvenile boot camp program, the output is not hopeful. The recidivism rate is surprisingly high among the graduates of these programs. This disappointing progress and output is due to several factors. Lack of sound aftercare programs is the major problem, because the delinquent juvenile, after completing his rehabilitation program, goes back into the same environment and faces the same problems again. If proper aftercare measures are not taken, the juvenile is likely to recidivate. Another setback of juvenile boot camps is the limited number of delinquent youths they can admit at a time; as a result, these boot camps cover only a small portion of the juvenile population of the United States.

Despite all these setbacks and disappointing performance, juvenile boot camps are very popular among the public and the political leadership. This is because juvenile crime is one of the major concerns of the country, and the authorities concerned want to show tangible actions taken to curb juvenile crime. The public perceives these programs as an active and direct approach to curtailing juvenile crime. This is the main reason that, despite all the failures, the authorities are continuously increasing their spending on these boot camps, and they have the consent of the public.

References

Tyler, Jerry, Darville, Ray & Stalnaker, Kathi. (2001) Juvenile Boot Camps: A Descriptive Analysis of Program Diversity and Effectiveness, Stephen F. Austin State University, Nacogdoches, TX, USA.

Wednesday, October 23, 2019

Optimization of portfolio risk Essay

This paper is a critique of the article entitled "Post-Modern Portfolio Theory" by Swisher and Kasten (2005). The framework for this critique is to determine whether there is ground to agree or disagree with the claims of the authors as measured against the evidence they presented. Swisher and Kasten asserted the unreliability of modern portfolio theory (MPT) and its mean-variance optimization (MVO) model for asset allocation in the financial services industry, particularly for building portfolios. They instead suggested the use of post-modern portfolio theory (PMPT), which presents a new method of asset allocation that optimizes a portfolio based on return versus downside risk (downside risk optimization, or DRO) instead of MVO.

What makes PMPT different from MPT, as explained by the authors, is its recognition that standard deviation is a poor proxy for how humans experience risk. They are therefore arguing on the basis of behavioral finance, which is not evident under MPT, in addition to the mathematical implications of the model. They explained that risk is an emotional condition which may be "fear of a bad outcome such as fear of loss, fear of underperformance, or fear of failing to achieve a financial goal" (Swisher and Kasten, 2005). They simply cannot equate risk with variability, as they argue that risk is "more complex than simple variance", although they admitted that it (risk) may be modeled and described mathematically.

The authors explained that downside risk (DR) is a risk definition built from three sub-measures: downside frequency, mean downside deviation, and downside magnitude. They added that each sub-measure must be defined with reference to an investor-specific minimal acceptable return (MAR) (Swisher and Kasten, 2005).

In translating their mathematical analysis, Swisher and Kasten (2005) stated that "people like to make money, not lose", and that the chance of frequently making above-average amounts of money is given a heavier weight than a tiny chance of success in a lottery. In other words, investors go where there are more chances of gain, accepting as part of the bargain that an investment may turn out to perform poorly, while hoping that it does not perform very poorly or very often.

Swisher and Kasten (2005) acknowledged the elegance of the mathematical model for describing the perfect investment, although they found the model, called modern portfolio theory (MPT), wrong. They argued that its defect is the production of inefficient portfolios, which contradicts what it is supposed to accomplish, that is, efficient asset allocation or portfolio building. The same authors took courage in declaring that the primary reason why MPT produces inefficient portfolios is the erroneous act of equating standard deviation with risk. They explained that risk is something else, and that a better mechanical framework is needed to describe it. Their paper therefore suggested a better framework for building better portfolios through downside risk optimization (DRO). They defined DRO as "optimization of portfolio risk versus return using downside risk as the definition of risk instead of standard deviation" (Swisher and Kasten, 2005).
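To make the contrast with standard deviation concrete, the following Python sketch computes simplified versions of the three downside sub-measures against a minimal acceptable return. It is not taken from Swisher and Kasten (2005): the return series, the MAR, and the exact formulas (in particular, downside magnitude is taken here as the worst single shortfall) are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the downside-risk sub-measures named above
# (downside frequency, mean downside deviation, downside magnitude),
# measured against an investor-specific minimal acceptable return (MAR).
# The monthly return series and the MAR are assumed illustrative values.

returns = np.array([0.04, -0.02, 0.01, 0.07, -0.08, 0.03, 0.00, -0.01, 0.05, 0.02])
mar = 0.005  # minimal acceptable return per period

shortfalls = np.minimum(returns - mar, 0.0)                   # 0 when the MAR is met
downside_frequency = np.mean(returns < mar)                   # how often we fall short
mean_downside_deviation = -shortfalls[shortfalls < 0].mean()  # average size of a shortfall
downside_deviation = np.sqrt(np.mean(shortfalls ** 2))        # root-mean-square shortfall
downside_magnitude = -shortfalls.min()                        # worst single shortfall

print("standard deviation:     ", returns.std())
print("downside frequency:     ", downside_frequency)
print("mean downside deviation:", mean_downside_deviation)
print("downside deviation:     ", downside_deviation)
print("downside magnitude:     ", downside_magnitude)
```

On the assumed series the standard deviation treats the +7% month and the -8% month symmetrically, while the downside measures respond only to returns below the MAR, which is the behavioural point the authors make.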
To be more convincing about their proposed model using DRO under PMPT, Swisher and Kasten (2005) asserted that DRO is superior to MVO under MPT as an asset allocation tool, using the following arguments in support. The first is the possibility that standard deviation can lead to ridiculous results when used as a measure of risk, while downside risk (DR) comes closer to capturing the human conception of risk as earlier defined. The second argument is the reality that financial asset returns do not behave as a normal distribution assumes; thus, even if, for the sake of argument, volatility were taken as a perfect representation of risk, the result would still not work. The third, which strengthens their position further, is the better performance of DRO over MVO when they demonstrated a head-to-head comparison of portfolios (Swisher and Kasten, 2005). More importantly, the comparison of the performance of the two produced conflicting results, hence only one could be correct. This was in fact the authors' basis for declaring that MPT produces inefficient portfolios.

It can be concluded that there is basis for the authors' claim that PMPT works better than MPT in building portfolios. They were able to demonstrate their assertion with evidence by using DRO as against MVO in their comparison. There is basis to agree with the authors' claim that PMPT points the way to an improved science of investing that incorporates not only DRO but also behavioral finance, as against MPT. Given this reason, and a number of failures caused by MPT in the past, there is basis to give due credit to the authors' proposal of PMPT, which justifies its use by investors and analysts as a better alternative for asset allocation.

Reference:

Swisher and Kasten (2005) Post-Modern Portfolio Theory, Journal of Financial Planning. URL: http://www.fpanet.org/journal/articles/2005_Issues/jfp0905-art7.cfm. Accessed January 22, 2008.