Friday, August 28, 2020
A comparison between cardiac CT scanning and cardiac digital subtraction angiography (DSA) -- The WritePass Journal
Abstract

This review aims to survey the literature on cardiac CT scanning and digital subtraction angiography, their clinical applications, techniques, and comparative value in coronary artery assessment and diagnosis.

Cardiac digital subtraction angiography

Coronary angiography is the conventional diagnostic technique used in coronary artery disease. It is a minimally invasive procedure, whereby a catheter is placed into the radial or femoral artery and advanced through the arterial system to the coronary arteries. A contrast agent is then injected at the aortic root, allowing visualisation of the arteries using x-ray in real time at up to 30 frames per second. This provides a view of the extent, location and severity of coronary obstructive lesions, such as atherosclerosis, and enables prognostic indication (Miller et al., 2008). Coronary angiography also enables catheter placement on either side of the lesion to assess pressure changes and determine the degree of flow obstruction (Miller et al., 2008). Digital subtraction angiography (DSA) likewise works by introducing a contrast agent into the coronary arteries and taking x-rays in real time; however, a pre-contrast image is taken first. This allows the post-contrast images to be subtracted from the original mask image, eliminating bone and soft tissue, which would otherwise overlie the artery under examination (Hasegawa, 1987). Unlike conventional angiography, it is possible to conduct DSA via the venous system, by accessing the superior vena cava via the basilic vein (Myerowitz, 1982). This removes the risks associated with arterial cannulation (Mancini & Higgins, 1985).
The procedure can also be performed with a lower dose of contrast agent and completed more quickly, thereby easing the constraints on using too much contrast during a procedure (Myerowitz, 1982). While DSA is the gold standard in arterial imaging of carotid artery stenosis (Herzig et al., 2004), its application to the coronary arteries is limited by motion artefacts associated with each heartbeat and breath (Yamamoto et al., 2009). There are numerous cardiac clinical applications of DSA: it can be used to assess coronary blood flow (Molloi et al., 1996), valvular regurgitation (Booth, Nissen & DeMaria, 1985), cardiac phase (Katritsis et al., 1988), congenital cardiac shunts (Myerowitz, Swanson & Turnipseed, 1985), and coronary bypass grafts and percutaneous coronary intervention outcomes (Katritsis et al., 1988; Guthaner, Wexler & Bradley, 1985). However, others have suggested that the coronary arteries are not visualised well because of their small size, their movement, their position overlying the opacified aorta and left ventricle, and confusion with other structures such as the pulmonary veins (Myerowitz, 1982).

Cardiac CT scanning

Development of CT scanning during the 1990s enabled an increase in temporal resolution sufficient to view the beating heart, and CT scans now provide a non-invasive technique for diagnostic and prognostic purposes. Cardiac CT has clinical applications beyond perfusion analysis, and can be used to assess structure and function of the heart (for instance in electrophysiology disorders or congenital heart disease) owing to its ability to provide anatomical detail (Achenbach & Raggi, 2010). CT scans can be used to assess coronary artery disease with or without injection of contrast agent (Achenbach & Raggi, 2010), by calcium scan or CT angiography.
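The calcium scan mentioned above is conventionally quantified with the Agatston method, which weights each calcified lesion's area by its peak attenuation. As a rough, non-clinical illustration (the lesion tuples below are invented inputs, not real scan data), a minimal sketch in Python:

```python
def density_weight(peak_hu):
    """Agatston density factor from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0  # below the calcium threshold: not scored
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """Sum of area (mm^2) x density weight over all calcified lesions.

    `lesions` is a list of (area_mm2, peak_hu) tuples -- a simplified
    stand-in for per-slice lesion measurements from a real scan.
    """
    return sum(area * density_weight(hu) for area, hu in lesions)

# Two lesions: 10 mm^2 peaking at 250 HU, 4 mm^2 peaking at 450 HU
print(agatston_score([(10, 250), (4, 450)]))  # 10*2 + 4*4 = 36
```

In practice the score is computed per slice from thresholded CT images; this sketch only captures the area-times-density-weight arithmetic behind the score described in the literature.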
Coronary calcium CT scanning rests on the evidence that coronary artery calcium is a correlate of atherosclerosis (Burke et al., 2003) and a strong prognostic predictor of the future development of coronary artery disease and cardiac events (Arad et al., 2000; Budoff et al., 2009; Achenbach & Raggi, 2010). Calcium is easily characterised on CT scan because of its high CT attenuation, and is quantified by the Agatston score, which considers the density and area of the calcification (Hoffman, Brady & Muller, 2003). Coronary CT angiography (CTA) permits visualisation of the coronary artery lumen to identify any atherosclerosis or stenosis within the vessels. Patients are injected intravenously with a contrast agent and then undergo a CT scan. There are limitations regarding the suitability of patients for coronary CTA, owing to the requirements of sinus rhythm, low heart rate and the ability to follow breath-holding commands. Furthermore, obesity presents a problem for patients who cannot fit into the scanner and affects the accuracy of the technique (Achenbach & Raggi, 2010).

Comparison of cardiac DSA and cardiac CT scanning

The technical differences between cardiac DSA and cardiac CT scanning give rise to differences in the clinical indications for the techniques, their diagnostic efficacy, and the risks and relative benefits to patients. Owing to the nature of the images produced, coronary CTA and DSA each suit different indications for use. While coronary DSA provides imaging of all aspects of perfusion, CTA used with contrast agent also provides this, with the additional advantage of being able to assess structure and function of the heart. Coronary CTA has been shown to have high accuracy in the detection and exclusion of coronary artery stenoses (Achenbach & Raggi, 2010). In a multicentre trial conducted by Miller et al.
(2008), patients underwent coronary calcium scoring and CT angiography prior to conventional invasive coronary angiography. The diagnostic accuracy of coronary CTA at ruling out or identifying coronary stenoses of 50% was shown to have a sensitivity of 85% and a specificity of 90%. This demonstrated that coronary CTA was particularly effective at ruling out non-significant stenoses. Additionally, coronary CTA was demonstrated to be of equal efficacy to conventional coronary angiography at identifying the patients who subsequently went on to have revascularisation via percutaneous intervention. This was shown by an area under the curve (AUC), a measure of accuracy, of 0.84 for coronary CTA and 0.82 for coronary angiography. Miller et al.'s (2008) study included a large number of patients at different study sites, and also represented a wide variety of clinical patient characteristics. The authors claim that these factors add to the quality and validity of the study findings, and propose that, although the study used patients with clinical indications for anatomical coronary imaging, the findings should be taken as evidence that coronary CTA is accurate at identifying disease severity in coronary artery disease. Miller et al. (2008) did, however, find that the positive predictive and negative predictive values of coronary CTA were 91% and 83% respectively, and therefore suggested that coronary CTA should not be used in place of the more accurate conventional coronary angiography. A low positive predictive value (related to the prevalence of disease) was proposed to be due to a tendency to overestimate stenosis degree as well as the presence of artefacts leading to false positive interpretation (Achenbach & Raggi, 2010). Other research comparing coronary CTA and conventional coronary angiography has highlighted variability in results.
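The accuracy figures quoted above (sensitivity, specificity and the predictive values) all derive from a 2x2 confusion table, and the point that positive predictive value tracks disease prevalence can be made concrete with a small sketch. The counts below are invented to match the reported 85%/90% operating point; they are not Miller et al.'s raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard test-accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # detect disease when present
        "specificity": tn / (tn + fp),   # rule out disease when absent
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative cohort of 1000 at 50% prevalence: 500 diseased, 500 healthy,
# with 85% sensitivity and 90% specificity
m = diagnostic_metrics(tp=425, fp=50, fn=75, tn=450)
print(round(m["sensitivity"], 2), round(m["specificity"], 2))  # 0.85 0.9

# The same test at 10% prevalence: PPV falls even though sensitivity and
# specificity are unchanged, as the text notes
low = diagnostic_metrics(tp=85, fp=90, fn=15, tn=810)
print(round(low["ppv"], 2))  # 0.49
```

This is why a test that rules stenosis in or out well in one cohort can still have a modest positive predictive value in a lower-prevalence population.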
A meta-analysis conducted by Gorenoi, Schonermark and Hagen (2012) investigated the diagnostic capabilities of coronary CTA and invasive coronary angiography using intracoronary pressure measurement as the reference standard. The authors found that CT coronary angiography had greater sensitivity than invasive coronary angiography (80% versus 67%), meaning that coronary CTA was more likely to identify functionally important coronary artery stenoses in patients. Despite this, the specificity of coronary CTA was 67%, compared to 75% for invasive coronary angiography, meaning that the technique was less effective at correctly excluding non-disease than invasive coronary angiography. This research appears to contradict the power of cardiac CTA at excluding diagnoses of coronary artery stenosis suggested by Miller et al. (2008). The study combined evidence from more than 44 studies and therefore had large statistical power. The authors interpret the results in light of the clinical importance of cardiac imaging, proposing that patients with a higher pre-test probability of heart disease will probably require invasive coronary angiography for revascularisation, indicating that coronary CTA may be a useful technique in those patients with an intermediate pre-test probability of heart disease who will subsequently not require invasive angiography. Goldberg et al. (1986) examined the efficacy of DSA in comparison to conventional coronary angiography in 77 patients. They found that the two angiograms agreed within one grade of severity in 84% of single cases and 90% of multiple cases, identifying both patent and lesioned vessels. The results led the authors to conclude that there was no significant difference between the two techniques and that DSA could be used in selective coronary angiography to obtain results comparable to those of conventional angiography.
Notwithstanding being a small study into the efficacy of DSA, the investigation also had several sources of inherent variability that should be considered when interpreting the results. These included differing sizes of digital imaging screen and the non-use of calipers, meaning that the interpretation of the images could vary throughout the study. The authors also suggest that, while showing strong support for the use of DSA in coronary artery disease, the technique may not necessarily permit prognostic conclusions or clinical decisions that are superior to conventional angiography, and therefore the further implementation of the techniques may
Saturday, August 22, 2020
Teaching Helen Keller Essay example -- Learning Education
The Truth About Helen Keller

In Learning Dynamics, the authors, Marjorie Ford and Jon Ford, choose to include an excerpt from The Story of My Life by Helen Keller to illustrate learning from experience. The excerpt, titled "The Most Important Day of My Life," mainly draws from Helen Keller's childhood as she begins her education on the third of March in 1887, three months before she turned seven years old. Keller recounts her early experiences of being awakened to a world of words and concepts through the brilliant teaching methods of her teacher, Anne Sullivan. Sullivan taught Keller new vocabulary by spelling words into the young girl's hand. At first, she does not understand the meaning of each word, but eventually learns to connect a word with the physical object it represents. Sullivan often left Keller to spend a great deal of time in nature as a way to develop her senses. In time, Keller discovers not only the physical world, but also a world of intangible concepts, thoughts, images and emotions. Moreover, she attributes much of her learning to Anne Sullivan, of whom she wrote, "I feel that her being is inseparable from my own, and that the footsteps of my life are in hers. All the best of me belongs to her." Understanding that words could be put together to evoke a mental picture, Helen Keller is able to paint many visual images in the readers' minds through her unique and expressive use of poetic language. Her writing style captures both her emotion and her experiences. She writes, "Have you ever been at sea in a dense fog, when it seemed as if a tangible white darkness shut you in, and the great ship, tense and anxious, groped her way toward the shore with plummet and sounding-line, and you waited with beating heart for something to happen?" ... Her education does not stop at W-A-T-E-R; she went on to universities and learned many other languages as well.
Keller makes a strong argument that her success is the result of her teacher, Anne Sullivan: "My teacher is so near to me that I scarcely think of myself apart from her." Even the Fords state, "Anne Sullivan showed her (Keller) that love and learning are intimately connected." Keller is an exceptional individual not merely because she overcame blindness and deafness; rather, she should be considered extraordinary for her dedication to achieving social change. Helen Keller should be recognized for her honesty in understanding that she was privileged to receive an education, and for using her knowledge and insight to help those less fortunate.

Works Cited

Ford, Marjorie, and Jon Ford. Learning Dynamics (Streamlines: Selected Readings on Single Topics). Belmont: Wadsworth Publishing, 1997.
Friday, August 21, 2020
The significance of power relations in communication in social work -- The WritePass Journal
Introduction

Effective communication is important in all spheres of human activity, in the interplay between human nature or individual agency and society or social structure. In this regard, informal interactions form the basis of social work, and effective communication helps facilitators to relate better with subjects (Koprowska, 2008). Social work refers to multi-disciplinary endeavours that seek to improve the quality of life and wellbeing of individuals, groups or communities through interventions on behalf of those afflicted by poverty, real or perceived social injustices, and infringements of their human rights. Interventions may take place through such instruments as research, direct practice, education, policy and community organising (Trevithick, 2010). Social influence and social control are concepts that refer to the means through which people's feelings, thoughts, behaviour and appearance are regulated in social systems. This is achieved primarily through socialisation: the shaping of one's opinions, behaviour and emotions by others through conformity, peer pressure, leadership and persuasion (Trevithick, 2010). Through this, individuals identify with the social system's values and norms and thereby acquire a stake in the maintenance of those norms and values. This ability to influence the behaviour of people is defined as power (Bar-On, 2002). Disparities in power between a social worker and the service user often result in the entrenchment of discrimination, oppression and non-involvement in practice.
This paper explores the significance of power relations in communication in social work and their contribution to discrimination and oppression, and the ways in which this can be challenged, including through improved participation and involvement, especially via effective communication.

Power relations in social work

Social work is inherently political and is therefore about power, and it is thus essential that social workers understand the effects of power within the structures in which they work and in society at large (Bar-On, 2002). The term power is often used interchangeably with authority, which is seen as legitimate by the social structure: the patterned social arrangements present in society that both emerge from and determine the actions of individuals (Bar-On, 2002). The sociological examination of power is concerned with the discovery and description of relative strengths, whether equal or unequal, stable or subject to change. Given that it is not innate, and that it can be conferred on others, power can be acquired through the possession or control of a form of power currency, which includes: formal authority assigned to the holder of a position (legitimate power); authority derived from particular skills or expertise (expert power); the capacity for the use of negative influences such as threats and punishment (coercive power); the ability to offer rewards and thereby to exert control over subjects (reward power); and the ability of the power wielder to attract others and build loyalty (referent power) (Bar-On, 2002).
Lukes (1974), in developing his three-dimensional model of power, argues that power is socially and culturally situated, with the socially patterned and culturally structured behaviour or practices of groups or institutions sustaining the bias in the system far more than any series of individual actions. Thus there is a latent contradiction of interests between those exercising power and those affected by it, whose real interests are excluded. This argument challenges views based on the idea of collective consent advanced by Arendt's communicative theory of power and the Weberian view of legitimate power, which dispel the notion that there is potential for powerlessness in social organisations. This has implications for social work, including the view that social workers exercise power but in many instances are unaware of the power they wield, and that it is essential to examine the position of social workers as it may influence what they see as their role (Bar-On, 2002). This scenario fuels powerlessness, with an individual (the service user) consenting to an action because of the social structure of power, which places authority in the social position of the professional rather than in the understanding or consensus between the two parties. This is heightened in instances such as the use of coercive power inherent in the legitimate power of the social worker conferred by statutory legislation (Askheim, 2003). Experiments in psychology suggest that the more power an individual wields, the less able they are to take the perspective of others, implying that they have less empathy. It has also been noted that reduced power is related to enhanced constraint and inhibition (Bar-On, 2002).
Stereotypes and prejudices inherent in social structure and culture thus remain unchallenged, resulting in possible discrimination, oppression or exclusion of sections of society or individuals requiring services (Thompson, 1993; Trevithick, 2010). The post-modern view of power advanced by Michel Foucault (1980) gives a central role to communication and knowledge in the understanding of power within society. The guiding principle of modernity reinforces existing power structures, thereby increasing the status of professionals, placing value on professional knowledge and marginalising local or subjugated knowledge. This focus is what is referred to as expert discourse (Foucault, 1980). This exclusion reflects the underlying power imbalance within societal structure, with the legitimisation of knowledge demonstrating the connection between exclusion from professional discourse and oppression (Pease, 2002). Users of services, or clients, often feel that social work efforts are, in this regard, inappropriate or insensitive to their needs. Social workers occupy a unique position in society, working both for the service user and for the good of society in general. This often results in tensions between loyalties to service users and to service agencies or public authorities. Often, social workers acknowledge feeling powerless in their dealings with service providers, while their statutory powers lead them to believe they are excessively powerful. In this, an interesting paradox arises in the polarity of power, whereby social workers are often considered either ineffective or extremely powerful (Pease, 2002).
The dichotomous view of power is often exacerbated within social work by the opposing structure between the worker and the client, which forces the worker into the powerful position, controlling and directing the course of action, often within a one-dimensional framework, while the client is forced into the position of powerlessness (Askheim, 2003). This is evident in the fact that, despite the decade-long adoption of the anti-oppressive practice theme to guide the teaching and practice of social work, the recipients of such practice (clients) have not been significantly involved in discussions regarding the development of anti-oppressive practice (Pease, 2002).

Paradigm shift for greater effectiveness

There is a need for social workers to understand their position within the prevailing power structures, as well as to understand why they feel powerless in their work (Pease, 2002). This would enable the challenging of structures that perpetuate oppression and the development of solutions that aid the struggle against the negative effects of power differentials on the users of services (Askheim, 2003). Differences in power must be taken into account and new strategies introduced so as to improve communication and, in turn, relations, which can then foster the effective conduct of social work within communities. The key to addressing the power held by social workers in their work and relations with service users is empowerment. This involves the redistribution of knowledge and the revival of other forms of knowledge which have been neglected and subjugated, rather than a focus on professional knowledge as the only legitimate form (Pease, 2002).
This would require a move away from such a modernist focus towards a more critical approach that challenges the dominant discourse and questions the association between knowledge and power, enabling the development of influence through social work approaches that aim to facilitate social change and transformation. Through this, the inclusion of service users in social processes, professional discourse and the development of practice is legitimised (Askheim, 2003). This offers a more realistic approach to the challenge of prevailing power structures that tend to perpetuate and enhance discrimination or oppression of service users, incorporating various dimensions including the oppressed as well as the social workers. In order for social work to learn from its relations with service users and organisations (service providers), their greater involvement is essential to ensure genuine empowerment, balancing gains from expertise against the empowerment of the individuals involved in the various aspects of social work (Askheim, 2003; Pease, 2002). Empowerment would reduce the inequalities in power relations in social work, as well as challenging the resulting oppression and discrimination. It would also enable the formation of meaningful professional relationships with community organisations, allowing for learning from both experience and expertise. For significant change towards empowerment, there is a need for focus and emphasis on social processes which encourage social workers to listen to the stories of service users, externalizing
Tuesday, May 26, 2020
Gmat Awa Essay Samples - Find The Best Online College Essay Sample For You
If you're reading this article then I'm pretty sure you want to know how to find GMAT AWA essay samples. This is an important question because it is the first step you will need to take to help you study and learn how to get into the University of California. Although you have a lot of different choices for colleges, it is very important that you choose the one that will provide you with the best education and research experience. The reason that UC Berkeley is such a prestigious institution is that it offers very good research opportunities. There are many different courses available and you will be able to pursue many different fields. You can also choose to go on to medical school or to business school. These courses are also very helpful in keeping you motivated. The courses at UC Berkeley are extremely important because they provide you with everything that you need to succeed. The courses are designed to challenge you and keep you learning. These classes allow you to learn new things and they are the ones that help you grow as a person. Some of the best colleges out there are also extremely challenging. They don't necessarily have tons of material to cover, but they are also not easy to go to school for. These schools focus on research and teaching. UC Berkeley is a perfect example of a school that will give you everything that you need in order to succeed in life. What you need to do to find these GMAT AWA essay samples is to take the time to apply to the schools that you really want to attend. Be honest with yourself and ask yourself what you want out of college. The type of student that you are will play a huge role in which school you end up attending. Make sure that you read all of the university's policies and make sure that you understand them. You also need to be aware of any fees that are required to be paid.
The UC Berkeley application is incredibly complicated, so be sure that you read the entire application before you send it. These GMAT AWA essay samples are critical to your success. If you can't afford the tuition and tutoring, don't worry, because there are a lot of scholarships that you can get as well. There are a lot of different ways to help yourself get into college, but taking the time to learn how to find GMAT AWA essay samples is a great first step.
Saturday, May 16, 2020
Diffusion of Responsibility Definition and Examples in Psychology
What causes people to intervene and help others? Psychologists have found that people are sometimes less likely to help out when there are others present, a phenomenon known as the bystander effect. One reason the bystander effect occurs is due to diffusion of responsibility: when others are around who could also help, people may feel less responsible for helping.

Key Takeaways: Diffusion of Responsibility

Diffusion of responsibility occurs when people feel less responsibility for taking action in a given situation, because there are other people who could also be responsible for taking action.
In a famous study on diffusion of responsibility, people were less likely to help someone having a seizure when they believed there were others present who also could have helped.
Diffusion of responsibility is especially likely to happen in relatively ambiguous situations.

Famous Research on Diffusion of Responsibility

In 1968, researchers John Darley and Bibb Latané published a famous study on diffusion of responsibility in emergency situations. In part, their study was conducted to better understand the 1964 murder of Kitty Genovese, which had captured the public's attention. When Kitty was attacked while walking home from work, The New York Times reported that dozens of people witnessed the attack, but didn't take action to help Kitty. While people were shocked that so many people could have witnessed the event without doing something, Darley and Latané suspected that people might actually be less likely to take action when there are others present. According to the researchers, people may feel less of a sense of individual responsibility when other people who could also help are present. They may also assume that someone else has already taken action, especially if they can't see how others have responded. In fact, one of the people who heard Kitty Genovese being attacked said that she assumed others had already reported what was happening.
In their famous 1968 study, Darley and Latané had research participants engage in a group discussion over an intercom (in actuality, there was only one real participant, and the other speakers in the discussion were actually pre-recorded tapes). Each participant was seated in a separate room, so they couldn't see the others in the study. One speaker mentioned having a history of seizures and seemed to begin having a seizure during the study session. Crucially, the researchers were interested in seeing whether participants would leave their study room and let the experimenter know that another participant was having a seizure. In some versions of the study, participants believed that there were only two people in the discussion: themselves and the person having the seizure. In this case, they were very likely to go find help for the other person (85% of them went to go get help while the participant was still having the seizure, and everyone reported it before the experimental session ended). However, when the participants believed that they were in groups of six (that is, when they thought there were four other people who could also report the seizure), they were less likely to get help: only 31% of participants reported the emergency while the seizure was happening, and only 62% reported it by the end of the experiment. In another condition, in which participants were in groups of three, the rate of helping was in between the rates of helping in the two- and six-person groups. In other words, participants were less likely to go get help for someone having a medical emergency when they believed that there were others present who could also go get help for the person.

Diffusion of Responsibility in Everyday Life

We often think about diffusion of responsibility in the context of emergency situations. However, it can occur in everyday situations as well.
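The group-size numbers above can be contrasted with a naive baseline: if bystanders acted independently, adding more of them could only increase the chance that somebody intervenes. A short sketch, using the study's one-bystander helping rate purely for illustration:

```python
def p_any_helps(p_individual, n_bystanders):
    """Chance that at least one of n independent bystanders helps.

    A naive baseline: if bystanders acted independently, adding people
    should only raise the odds that someone intervenes.
    """
    return 1 - (1 - p_individual) ** n_bystanders

# If each person alone helped 85% of the time (the one-bystander rate in
# Darley and Latane's seizure study), independence would predict:
for n in (1, 2, 5):
    print(n, round(p_any_helps(0.85, n), 3))
# The observed rates moved the other way (85% alone vs. 31% in six-person
# groups), which is the signature of diffusion of responsibility.
```

The gap between the rising independent-bystander prediction and the falling observed rates is exactly what the diffusion-of-responsibility account explains.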
For example, diffusion of responsibility could explain why you might not put in as much effort on a group project as you would on an individual project (because your classmates are also responsible for doing the work). It can also explain why sharing chores with roommates can be difficult: you might be tempted to just leave those dishes in the sink, especially if you can't remember whether you were the person who last used them. In other words, diffusion of responsibility isn't just something that occurs in emergencies: it occurs in our daily lives as well.

Why We Don't Help

In emergencies, why would we be less likely to help if there are others present? One reason is that emergency situations are sometimes ambiguous. If we aren't sure whether there's actually an emergency (especially if the other people present seem unconcerned about what is happening), we might be concerned about the potential embarrassment of causing a "false alarm" if it turns out that there was no actual emergency. We may also fail to intervene if it's not clear how we can help. For example, Kevin Cook, who has written about some of the misconceptions surrounding Kitty Genovese's murder, points out that there wasn't a centralized 911 system that people could call to report emergencies in 1964. In other words, people may want to help, but they may not be sure whether they should or how their help can be most effective. In fact, in the famous study by Darley and Latané, the researchers reported that the participants who didn't help appeared nervous, suggesting that they felt conflicted about how to respond to the situation. In situations like these, being unsure of how to react, combined with the lower sense of personal responsibility, can lead to inaction.

Does the Bystander Effect Always Occur?
In a 2011 meta-analysis (a study that combines the results of previous research projects), Peter Fischer and colleagues sought to determine how strong the bystander effect is and under which conditions it occurs. When they combined the results of previous research studies (totaling over 7,000 participants), they found evidence for the bystander effect. On average, the presence of bystanders reduced the likelihood that the participant would intervene to help, and the bystander effect was even greater when more people were present to witness a particular event. However, importantly, they found that there may actually be some contexts where the presence of others doesn't make us less likely to help. In particular, when intervening in a situation was especially likely to be dangerous for the helper, the bystander effect was reduced (and in some cases, even reversed). The researchers suggest that, in particularly dangerous situations, people may see other bystanders as a potential source of support. For example, if helping in an emergency situation could threaten your physical safety (e.g. helping someone who is being attacked), you're probably likely to consider whether the other bystanders can help you in your efforts. In other words, while the presence of others usually leads to less helping, this isn't necessarily always the case.

How We Can Increase Helping

In the years since the initial research on the bystander effect and diffusion of responsibility, people have looked for ways to increase helping. Rosemary Sword and Philip Zimbardo wrote that one way of doing this is to give people individual responsibilities in an emergency situation: if you need help or see someone else who does, assign specific tasks to each bystander (e.g. single out one person and have them call 911, and single out another person and ask them to provide first aid).
Because the bystander effect occurs when people feel a diffusion of responsibility and are unsure of how to react, one way to increase helping is to make it clear how people can help.

Sources and Additional Reading:

Darley, John M., and Bibb Latané. "Bystander Intervention in Emergencies: Diffusion of Responsibility." Journal of Personality and Social Psychology 8.4 (1968): 377-383. https://psycnet.apa.org/record/1968-08862-001
Fischer, Peter, et al. "The bystander-effect: A meta-analytic review on bystander intervention in dangerous and non-dangerous emergencies." Psychological Bulletin 137.4 (2011): 517-537. https://psycnet.apa.org/record/2011-08829-001
Gilovich, Thomas, Dacher Keltner, and Richard E. Nisbett. Social Psychology. 1st edition, W.W. Norton & Company, 2006.
Latané, Bibb, and John M. Darley. "Group inhibition of bystander intervention in emergencies." Journal of Personality and Social Psychology 10.3 (1968): 215-221. https://psycnet.apa.org/record/1969-03938-001
"What Really Happened the Night Kitty Genovese Was Murdered?" NPR: All Things Considered (2014, Mar. 3). https://www.npr.org/2014/03/03/284002294/what-really-happened-the-night-kitty-genovese-was-murdered
Sword, Rosemary K.M., and Philip Zimbardo. "The Bystander Effect." Psychology Today (2015, Feb. 27). https://www.psychologytoday.com/us/blog/the-time-cure/201502/the-bystander-effect
Wednesday, May 6, 2020
Pride and Prejudice
The path to marriage initiates in the very first paragraph of Jane Austen's Pride and Prejudice. This courtship novel begins with the premise that "a single man in possession of a fortune must be in want of a wife" (pg. 5). Throughout the competition for the single men, characters are naturally divided by the norms of their social standing. However, the use of social conventions and civility further divides them. The characters in need of the most moral reform remain unchanged, leaving a path for the reformers to travel to each other's company. Austen uses the stagnant characters and their flaws as a line that needs to be crossed in order to achieve a dynamic marriage of mutual respect. Three of the Bennet daughters get married in the novel. When Elizabeth visits, they must escape him by not walking around the gardens, allowing Charlotte to easily show her the house without interruption. Elizabeth notes that the house has a pleasant air when Mr. Collins can be forgotten (157). The consequences of a marriage to someone so silly are convenience and avoidance. These marriages to Wickham and Collins portray alternate realities for Elizabeth. If she accepts either of these men, she denies herself growth as a character. A process of elimination permits Elizabeth to continue on a path towards her ultimate match, Mr. Darcy. These two characters must overcome their prejudices to achieve the ideal marriage. As previously stated, Elizabeth needs to hold her tongue and use her judgment more cautiously. Jane best explains this after the night they meet Mr. Bingley: "I would wish not to be hasty in censuring any one; but I always speak what I think" (16). Jane defends her own character by revealing Elizabeth's hasty nature to attack others. Again, Darcy is proud and holds grudges.
He explains himself when he says, "I cannot forget the follies and vices of others so soon as I ought, nor their offences against myself… My temper would perhaps be called resentful" (58). He stays true to his convictions in his interactions with other characters, but his mode of relaying these feelings must change in order to catch Elizabeth. If they can achieve moral reform, their personalities will complement each other.
Tuesday, May 5, 2020
Law Essay
"This is about artistic freedom and basic rights of free expression, which need to be available to all, whether they have money and lawyers or not." –Shepard Fairey

"The journalism that AP and other organizations produce is vital to democracy. To continue to provide it, news organizations must protect their intellectual property rights as vigorously as they have historically fought to protect the First Amendment." –Press Release, Associated Press

INTRODUCTION

During the 2008 campaign, an image featuring then-presidential candidate Barack Obama's photo became the subject of a legal dispute that continued long after the election ended. Amidst the presidential debates, another debate was brewing: between a famous visual artist, Shepard Fairey, and a major newsgathering agency, the Associated Press (AP). An AP photographer, Mannie Garcia, took the picture of the presidential hopeful, which Fairey popularized on posters that he emblazoned with the word "Hope." Once it was determined that Fairey had used AP photographer Mannie Garcia's image of presidential candidate Obama in his posters, the issue in Fairey v. Associated Press was whether Fairey's use of the photo constituted "fair use," an affirmative defense under the Copyright Act. If so, Fairey's "fair use" would excuse the copyright infringement and Fairey would not have to pay. If not, Fairey would be liable for copyright infringement and would likely have to pay damages. Although Fairey settled the lawsuit with the AP in January 2011, another lawsuit was still pending: that of the AP against Fairey's clothing company, "Obey Clothing," and other clothing stores (Urban Outfitters, Nordstrom, and Zumiez) for copyright infringement. The parties, however, settled their claims in March 2011.
In the settlement agreements, the parties explicitly stated that they still maintain their legal positions in the case. Thus, the dispute about whether Fairey's use of the photo constituted fair use has never been resolved. Although the settlement agreement stated that the AP and Obey Clothing agreed to share future profits from sales of the Obama image on merchandise, the underlying issue is still very much alive. The case between Fairey and the AP is certainly timely and addresses copyright in the context of news photos. This issue will continue to be relevant given that President Obama is the likely Democratic candidate for the 2012 presidential election, and it is certainly possible that other businesses will seek to capitalize on Garcia's photo. Not only may businesses seek to capitalize on this image, but the Obama campaign itself may look to exploit the image, because the image became so iconic in the 2008 election. Moreover, as opposed to prior case law concerning appropriation of art, this set of facts incorporates new media. "It has become especially important in an era when digital technology allows artists to, with the press of a few buttons, use other people's finished products as raw material for new works." Fair use case law can certainly be applied to cases in the digital area. The best way to predict the outcome of the AP suit against Fairey's company is to understand how the court might have ruled in the original case: that of the AP against Fairey personally. This Essay will explore whether Fairey's use of the AP photographer's photo constituted "fair use" and will analyze how the relevant fair use cases would bear on the present case. The AP originally asked to be credited and to receive compensation. First, I will introduce and explain the fair use four-factor approach laid out in section 107 of the Copyright Act. Second, I will discuss how fair use case law, such as Rogers v. Koons, Campbell v. Acuff-Rose Music, Inc.
, Harper & Row Publishers, Inc. v. Nation Enterprises, Dr. Seuss Enterprises v. Penguin Books, and Leibovitz v. Paramount Pictures Corp., enhances our understanding of these factors. Finally, this Essay will analyze the Obama Hope Poster case in the context of the four factors and arrive at a conclusion based on case law and public policy.

Key Terms

1. Copyrights
2. Moral rights of copyrights
3. Economic rights of copyrights
4. The Copyright Act of 1976 of the United States
5. World Intellectual Property Organization (WIPO)
6. The Patent Cooperation Treaty (PCT)
7. Industrial design
8. The Hague System
9. Copyright Agreement
10. Federal Law of Copyright
11. Industrial Property Law
12. ASCAP
13. The Berne Convention for the Protection of Literary and Artistic Works
14. Instituto Nacional del Derecho de Autor (INDAUTOR)
15. International Trademark Registration (Madrid System)
16. Tariffs
17. Industrial Drawing
18. Brand piracy
19. Registered trademark
20. Natural person
21. Certification marks
22. Collective trademarks
23. Defensive trademarks
24. Generalized trademark
25. Trademark look
26.
Wednesday, April 15, 2020
Waiting for Icarus: The Faces of Love
Waiting for Icarus: The Faces of Love

Introduction

Waiting is an inevitable part of daily life. One has to do many kinds of waiting: for an examination result, for a friend's answer to a written letter, for a call from a boss, for the daily allowance or salary, for the pizza delivery, for the lovely evening, or for the beautiful sunrise. Waiting is essential, as the old saying goes. As the famous author and poet Ralph Waldo Emerson once asked in his writing: how much time does one spend in his lifetime merely waiting? But of all the waiting that one does in a lifetime, isn't the waiting for that special someone the most romantic yet tedious waiting of all?

Henry Van Dyke, an American short story writer, once said this famous line: "Time is too slow for those who wait, too swift for those who fear, too long for those who grieve, too short for those who rejoice, but for those who love, time is eternity" (Van Dyke 51). Can love really wait for eternity? This question is the same question that Muriel Rukeyser asks in her poem "Waiting for Icarus." Several views about love and waiting can be seen throughout this poem, most of which reflect the reality of loving, the reality of life.

Review of Related Literature

An essay written about this poem and posted on peerpapers.com relates waiting and loving very well. It says that because the narrator of the poem, most likely Icarus' girlfriend, loves Icarus greatly, she does not notice how time passes by easily. As a true lover, Icarus' girlfriend does not even notice time.
We can also relate this statement to the quote from Van Dyke mentioned earlier: because love is too great, it is definitely willing to wait until eternity. However, love, as seen through the speaker of the poem, lies beyond waiting alone. Besides waiting, she also shows signs of worrying. The article suggests that she is terribly worried about Icarus. More than the idea of love and waiting, the poem, according to the article, also shows concern and fear. The essay expresses how, in loving, one will be constantly wondering about the welfare of the loved one. Lastly, this first essay reflects the relationship between loving and longing. It says: "it is easy to see how much she misses her lover." The speaker of the poem really shows, through her words, how much she longs for Icarus. She waits as much as she can because she longs for her love, and as long as she loves Icarus, she can wait until eternity.

Another essay, posted on megaessays.com, shows how promises are significant in loving. According to the essay, the speaker of the poem gets close to her lover because of the promises that her lover gave her. While she is waiting, she reminisces about the promises he made to her and about how he confided his dreams and ambitions to her. These promises and confidences made her feel close and special to him. In this passage, we can infer two ideas about love according to the author's analysis of the poem. First, Icarus' lover is able to wait for Icarus for such a long time because she feeds her mind with the promises that Icarus gave her. This means that the essential part of waiting for a loved one lies within the memories shared. Second, the promise is essential in loving; it serves as the fuel of love, just like in the poem. Icarus' lover got close to Icarus because of the promises given to her. The essay also suggests another angle of the story: the angle of the man, referred to as Icarus in the poem.
The article says, "In the title Waiting for Icarus, Icarus symbolizes men who run away from their relationships." It suggests that Icarus does not necessarily refer to the man in the myth, Icarus, who flew with his father, Daedalus, but disobeyed his father and drowned in the ocean. Instead, Icarus reflects every man who makes promises to a loved one, not necessarily a girl, but in the end runs away from those promises. In conclusion, this second essay reflects that, in the relationship where the speaker finds herself, Icarus is the man who escaped from his responsibility and the speaker is the girl who is left waiting and holding on to broken promises.

Critical Perspective: Formalist Perspective

Analyzing the story from the formalist point of view, we can infer that the speaker in the poem represents all of those who wait for their lovers and Icarus in the poem represents all who are loved. The setting of the poem is the beach, where the girl is prompted by Icarus to wait for him. Because the girl is unsure whether she has been waiting for Icarus for one whole day, we can infer that the time could have been late in the afternoon or just before the sun sets. The characters in the poem are the speaker, presumably a girl, and someone she is waiting for, who could be her lover and, as the title suggests, a lover by the name of Icarus. The girl, having no particular name, can easily be inferred to reflect all girls who love. The poem can be spoken by any kind of girl who loves, has been given promises, waits, and suddenly realizes that she could have been waiting in vain all that time. The one who is waited for, on the other hand, may symbolize all those who are loved. The name of the character in the title, Icarus, is an allusion to Greek mythology. Icarus is the son of the famous inventor Daedalus. The two of them are jailed in the tower of the labyrinth. In order to escape prison and death, Daedalus created wings made from wax and bird feathers.
They used these wings to soar into the sky and fly away from their prison. However, because the wings were held together by wax alone, Daedalus warned his son not to go too near the sun because the heat might melt the wax and the wings might lose their feathers. Icarus did not heed his father's warnings and instead enjoyed flying so much that he flew as close to the sun as possible. Because of this, his wings broke and he drowned in the sea. This story can be inferred to reflect love. Icarus, the lover, ventures into love too much, so that by the time he realizes he is flying closer than he can handle, it is too late and he is already falling down into the big sea of hopelessness. The language used in the story is conversational English. It resembles an entry in a girl's diary or a stream of consciousness that runs through her thoughts. Since the speaker uses informal language such as "cringe before his father" and "a trashy lot," the reader gets the feeling that the speaker is talking to them. They can easily grasp the feeling that the speaker wants to convey. Moreover, they can easily relate to what the speaker is feeling because the words used are conversational and can easily be understood. The meaning conveyed in the poem is direct. It reflects the girl expressing her sentiments about her lover, who promised her a lot of things but has left her empty-handed. However, different readers of the poem can have different understandings of the poem, depending on how deeply they can venture between the words used in the poem. Different levels of meaning lie in the poem, depending on how readers understand it.

Analysis

The poem "Waiting for Icarus" is a story that reflects several faces of love. It reflects waiting, hoping, and realizing. Waiting. The poem, as superficially seen, is about a girl who waits for her lover: "I have been waiting all day, or perhaps longer." The girl who loves waits, and faithfully waits, as her love tells her to.
This is the first face of love: waiting faithfully and honestly. Hoping. The speaker in the poem has many hopes and wishes that can be seen in the poem. First of these are the hopes that all that her family and friends say about her love is false. She wishes that Icarus will prove false what her mother said to her: that he only wants to get away from her and that inventors, like Icarus, do not keep their promises: "I remember they said he only wanted to get away from me / I remember mother saying: Inventors are like poets, a trashy lot." She also hopes that the girls who make fun of her, thinking that she is waiting in vain, will be proved wrong. Lastly, she wishes for a chance to try what her lover had tried. When she says "I would have liked to try those wings myself," it can be inferred that she wants to have the courage to try other things, have other experiences, rather than locking herself into a love that she is not sure is worth her waiting. "He said he would be back and we'd drink wine together." This line is full of hopes from the speaker for her lover to return and bring back the happy times that both of them have shared. She holds on to the promise that once her lover has returned, things will already be better and that "everything would be better than before." "He said we were on the edge of a new relation." This line foreshadows the direction of their relationship. By saying that their love is on the edge of a new relation, it can mean only two things: either their relationship will improve or it is better to end it there. "He said he would never again cringe before his father." This is the start of the many promises that her lover has given her. Her mere listing of these promises creates an illusion that she hopes these promises will become realities once her loved one has returned. The lines that follow are additions to the list of promises. "He said that he was going to invent full-time / He said he loved me that going into me."
"He said he was going into the world and the sky." In this line, the fault of the one being waited for can be reflected. He said he was going somewhere, fulfilling dreams in which the girl is not included. However, the girl continues to hope and trust her love. "He said all the buckles were very firm / He said the wax was the best wax." In these lines, it can be inferred that the boy is quite sure of where he is going. He is sure that whatever it is he is about to do, he will accomplish it very well. He is too proud of himself. "He said Wait for me here on the beach." Here is another promise that the boy has given: a promise that he will return. The next line, "He said Just don't cry," expresses that the boy still loves the girl, still cares for her. Finally, love is most of all about realizations. And there are numerous realizations that can be inferred from this poem. "I remember the gulls and the waves / I remember the islands going dark on the sea." This shows how long the girl has waited, probably all her life, until the sun sets and darkens the hopes of the girl. When all the light and hope that the speaker of the poem has have vanished, she realizes that the boy will never return. "I remember the girls laughing / I remember they said he only wanted to get away from me / I remember mother saying: Inventors are like poets, a trashy lot / I remember she told me those who try out inventions are worse." These lines add up to the girl's realization that she could have been wrong, that the boy she loved with all her life will never return. And finally, she realizes, "I remember she added: Women who love such are the worst of all," that she was stupid for believing everything the guy said, just because she loves too much. "I have been waiting all day, or perhaps longer. I would have liked to try those wings myself. It would have been better than this."
This reflects the girl's decision to embark on her own journey: to start anew, take wings, and search again for a new life, a new love that could be better than what they had.

Conclusion

In the end, it can be concluded that there are as many faces of love as there are faces on this earth. Each of us loves as uniquely as our individualities are unique, but however we love, we cannot escape the fact that at some point in our lives, we wait, we hope, and we realize a lot.

Works Cited

Thinkexist.com. Accessed July 24, 2008.
Van Dyke, Henry. The Poems of Henry Van Dyke. New York: Hard Press, 2006.
Rukeyser, Muriel. Collected Poems of Muriel Rukeyser. Pittsburgh: University of Pittsburgh Press, 2005.
Peerpapers.com. Accessed July 24, 2008.
Mega Essays.com. Mega Essays LLC. Accessed July 24, 2008.
Caraway, James. Mediterranean Perspectives: Literature, Social Studies and Philosophy. Buckinghamshire, 2000.
Thursday, March 12, 2020
Creating Delphi Components Dynamically (at run-time)
Creating Delphi Components Dynamically (at run-time) Most often when programming in Delphi you dont need to dynamically create a component. If you drop a component on a form, Delphi handles the component creation automatically when the form is created. This article will cover the correct way to programmatically create components at run-time. Dynamic Component Creation There are two ways to dynamically create components. One way is to make a form (or some other TComponent) the owner of the new component. This is a common practice when building composite components where a visual container creates and owns the subcomponents. Doing so will ensure that the newly-created component is destroyed when the owning component is destroyed. To create an instance (object) of a class, you call its Create method. The Create constructor is a class method, as opposed to virtually all other methods you’ll encounter in Delphi programming, which are object methods. For example, the TComponent declares the Create constructor as follows: constructor Create(AOwner: TComponent) ; virtual; Dynamic Creation with OwnersHeres an example of dynamic creation, where Self is a TComponent or TComponent descendant (e.g., an instance of a TForm): with TTimer.Create(Self) dobeginInterval : 1000;Enabled : False;OnTimer : MyTimerEventHandler;end; Dynamic Creation with an Explicit Call to FreeThe second way to create a component is to use nil as the owner. Note that if you do this, you must also explicitly free the object you create as soon as you no longer need it (or youll produce a memory leak). Heres an example of using nil as the owner: with TTable.Create(nil) dotryDataBaseName : MyAlias;TableName : MyTable;Open;Edit;FieldByName(Busy).AsBoolean : True;Post;finallyFree;end; Dynamic Creation and Object ReferencesIt is possible to enhance the two previous examples by assigning the result of the Create call to a variable local to the method or belonging to the class. 
This is often desirable when references to the component need to be used later, or when scoping problems potentially caused by with blocks need to be avoided. Here's the TTimer creation code from above, using a field variable as a reference to the instantiated TTimer object:

   FTimer := TTimer.Create(Self);
   with FTimer do
   begin
     Interval := 1000;
     Enabled := False;
     OnTimer := MyInternalTimerEventHandler;
   end;

In this example FTimer is a private field variable of the form or visual container (or whatever Self is). When accessing the FTimer variable from methods in this class, it is a very good idea to check whether the reference is valid before using it. This is done using Delphi's Assigned function:

   if Assigned(FTimer) then FTimer.Enabled := True;

Dynamic Creation and Object References without Owners

A variation on this is to create the component with no owner, but maintain the reference for later destruction. The construction code for the TTimer would look like this:

   FTimer := TTimer.Create(nil);
   with FTimer do
   begin
     ...
   end;

And the destruction code (presumably in the form's destructor) would look something like this:

   FTimer.Free;
   FTimer := nil;

   (* Or use the FreeAndNil(FTimer) procedure, which frees an
      object reference and replaces the reference with nil. *)

Setting the object reference to nil is critical when freeing objects. The call to Free first checks to see if the object reference is nil or not, and if it isn't, it calls the object's destructor Destroy.
Dynamic Creation and Local Object References without Owners

Here's the TTable creation code from above, using a local variable as a reference to the instantiated TTable object:

  localTable := TTable.Create(nil);
  try
    with localTable do
    begin
      DataBaseName := 'MyAlias';
      TableName := 'MyTable';
    end;
    ...
    // Later, if we want to explicitly specify scope:
    localTable.Open;
    localTable.Edit;
    localTable.FieldByName('Busy').AsBoolean := True;
    localTable.Post;
  finally
    localTable.Free;
    localTable := nil;
  end;

In the example above, localTable is a local variable declared in the same method containing this code. Note that after freeing any object, it is in general a very good idea to set the reference to nil.

A Word of Warning

IMPORTANT: Do not mix a call to Free with passing a valid owner to the constructor. All of the previous techniques will work and are valid, but the following should never occur in your code:

  with TTable.Create(Self) do
  try
    ...
  finally
    Free;
  end;

The code example above introduces unnecessary performance hits, impacts memory slightly, and has the potential to introduce hard-to-find bugs. Find out why.

Note: If a dynamically created component has an owner (specified by the AOwner parameter of the Create constructor), then that owner is responsible for destroying the component. Otherwise, you must explicitly call Free when you no longer need the component.

Article originally written by Mark Miller.

A test program was created in Delphi to time the dynamic creation of 1000 components with varying initial component counts. The test program appears at the bottom of this page. The chart shows a set of results from the test program, comparing the time it takes to create components both with owners and without. Note that this is only a portion of the hit: a similar performance delay can be expected when destroying components.
The time to dynamically create components with owners is 1200% to 107960% slower than the time to create components without owners, depending on the number of components on the form and the component being created.

The Test Program

Warning: This test program does not track and free components that are created without owners. By not tracking and freeing these components, the times measured for the dynamic creation code more accurately reflect the real time to dynamically create a component. Download Source Code

Warning! If you want to dynamically instantiate a Delphi component and explicitly free it sometime later, always pass nil as the owner. Failure to do so can introduce unnecessary risk, as well as performance and code maintenance problems. Read the "A warning on dynamically instantiating Delphi components" article to learn more...
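The ownership rule that both warnings describe can be condensed into one sketch. This is a minimal illustration in the spirit of the article's own examples (the form class, the timer handler, and the 'MyAlias'/'MyTable' names are placeholders taken from those examples, not a real project); it shows the two valid patterns side by side and is not the article's test program.

```
procedure TMyForm.CreateComponents;
var
  Table: TTable;
begin
  // Pattern 1: pass an owner (Self). The owner frees the component
  // during its own destruction, so we must NOT call Free ourselves.
  FTimer := TTimer.Create(Self);
  FTimer.Interval := 1000;
  FTimer.Enabled := False;
  FTimer.OnTimer := MyTimerEventHandler;

  // Pattern 2: pass nil. No owner will free it, so we must,
  // inside try..finally so the Free runs even on an exception.
  Table := TTable.Create(nil);
  try
    Table.DataBaseName := 'MyAlias';
    Table.TableName := 'MyTable';
    Table.Open;
  finally
    Table.Free;
  end;
end;
```

Either pattern alone is fine; the anti-pattern is combining them, i.e. Create(Self) followed by an explicit Free, since the component must then be registered with and later removed from the owner's component list for no benefit.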
Tuesday, February 25, 2020
Best Workplace Practices that contribute to high performance Essay
Best Workplace Practices that contribute to high performance - Essay Example

Data security therefore becomes a very important aspect of running the business. Data security ensures that all information related to the company and its services is constantly safeguarded from enemies and destructive forces, and every employee at every level is responsible for data security at all times. Data security at our organization is ensured through a variety of means, such as training and orientation, policies and procedures, and safety and security measures. One of the strongest and most fool-proof measures followed is the Restricted Access Practice (RAP), which ensures that data, whether electronic or printed, is not transferred outside the company's domain. For this, employees' access to electronic data is limited to official work domains only; this means no employee can send information to, or receive information from, any external source. Employees are not allowed to carry any form of printed material outside the organization's premises. This access is limited to only one department, which liaises with external entities, such as US government bodies and external vendors, whenever required. Moreover, all electronic information, whether in the form of applications, data, or programs, created by employees is the company's property and for its use, thereby protecting against any form of breach. Any breach of this practice is dealt with immediately through strict disciplinary action. The RAP has been extremely helpful in safeguarding the company's and its clients' information, thereby increasing its credibility and reliability. Moreover, this practice eliminates or mitigates any selfish intention an employee might have of taking undue advantage of the organization's resources, information, and data. Thereby, this practice also helps in orienting all employees towards the company's goals. For the business, this practice has helped in earning the credibility of its largest client, the US
Saturday, February 8, 2020
Mid-term history exam Essay Example | Topics and Well Written Essays - 750 words - 3
Mid-term history exam - Essay Example

In fact, fabrics were made in the home with techniques that remained largely unchanged since the Middle Ages. The machines used within the home to make textile fabrics were small and hand-powered. The Industrial Revolution, however, replaced these hand-powered machines with coal-powered ones and put the manufacturing responsibilities in the hands of a centralized factory system (Backer). These coal-powered technologies, along with the steam engine, are the most commonly cited cause of the Industrial Revolution (Hudson). James Watt's development of the steam engine allowed the transformation of fuel into mechanical work, which quickly became a staple instrument in a variety of different industries, including powering locomotives, ships, textile machines, and automobiles. However, other explanations may aid in explaining why the Revolution occurred. One theory states that capitalism is responsible for the Revolution, insofar as capitalism incited merchants to take more control over their workers. When workers were paid a piecework rate in a factory, as opposed to the home, workers would produce more in order to have a better lifestyle. Centralization of material production into factories was the inevitable result of the capitalist system (Backer). Another theory looks at the differences in scientific knowledge between countries and tries to look at the Revolution in terms of which countries and cultures were able to think "mechanically" (Backer). Indeed, one of the first countries capable of such "mechanical" thinking was Great Britain, which is commonly believed to have been the first country to industrialize. In the case of England, science and the dissemination of practical scientific knowledge played a large role. At that time, the new science of Newton was clearly associated with applied science.
Those scientists disseminated their knowledge to an interested public for commercial and practical reasons through talks like the famous Boyle lectures and through various scientific societies like the Royal Society of London (Hudson). In many ways, the development of science in England and the development of industrialization in England were inextricably tied together. "By the end of the century it was simply assumed that the mechanization of manufacturing, and hence of labor, required a working knowledge of Newtonian science" (Jacob 167). Also, the concentration of knowledge within the limited land mass of the British Isles may have played a role in contributing to industrialization. Even though England was a source of new scientific knowledge, it would have been difficult to disseminate that knowledge if the country had been less densely populated, like continental Europe (Jacob 160-163). The Industrial Revolution left a number of social effects on England throughout the rest of the 19th and 20th centuries. For one, it led to the birth of the modern factory and, consequently, the modern city that developed around the factories. These factory towns brought in employees from all over the country looking for opportunities in the new industrialized world. A negative consequence of this was, of course, child labor. Child mortality rates increased throughout the industrialization period because parents would send their children off to dangerous employment in specialized tasks within the factories (Hudson). Although child labor existed prior to industrialization, it became a prevalent phenomenon in society, in which children as young as four
Thursday, January 30, 2020
Philosophy metaphysics Essay Example for Free
Philosophy metaphysics Essay

In order to clearly answer the first question, it is important first to answer the question "what is the soul for Aristotle" and as such give an account of how he views substance and separability. Aristotle posits in De Anima that the soul is the substance in the sense which corresponds to the definitive formula of a thing's essence. That means that it is "the essential whatness" of a body of the character just assigned (Book II, 412b). As such, the soul is the essence of being, and the essence of being is its substance. By being, Aristotle refers to the thing itself, while by essence he refers to the primary essence of the thing itself, wherein one is treated as the subject in its own right, i.e., the good itself is treated as the essence of the good. It can be deduced then, using hypothetical syllogism, that if the soul is the essence of a being and the essence of being is its substance, then the soul is the substance of a being. He argued further that whatever is has a being, and whatever has a being has a substance; this is the grounding of his epistemology. Hence, whatever is has a substance. This implies then that being is identical to substance. If such is the case, then, using the principle of excluded middle, being is also identical to soul. Now, let us elucidate the concept of separability. Aristotle first distinguished the difference between the body and the soul. The body, as he stated, corresponds to what exists in potentiality, it being the subject or matter of a possible actuality. The soul, on the other hand, is a substance (actuality) in the sense of the form of a natural body having life potentially within it; it is the actuality of the body (Aristotle, Book II, 421b). As he delineates the dissimilarity between the body and soul, one should not be misled into regarding the two as separate entities.
They may at some points seem to be separate, for in the former we are talking about a corporeal body in its spatio-temporal existence, while in the latter we are talking of an incorporeal body transcending the spatio-temporal world. However, their separability in terms of space and time does not mean they are separate as a whole, that is, as an entity having life. As Aristotle argues, "the soul is inseparable from its body, or at any rate that certain parts of it are (if it has parts) for the actuality of some of them is nothing but the actualities of their bodily parts" (Aristotle, Book II, 413a). He argues further that "body cannot be the actuality of the soul; it is the soul which is the actuality of a certain kind of body. Hence the soul cannot be without a body, while it cannot be a body; it is not a body but something relative to a body. That is why it is in a body and a body of a definite kind" (Book I, 421a). It can be deduced then that the soul and the body are inseparable from each other. It is because the essence of both their existence lies in the interdependency of their telos: the soul actualizes the potential life in the body, while the body provides an entity for the soul to actualize itself in the material world. Since the soul is the actuality of a natural body, it would naturally have certain functions which it can actualize. Aristotle has identified these functions to be the following: (1) the power of self-nutrition, or the nutritive function; (2) the power of sensation, which includes the sensory and appetitive functions; (3) the power of movement and rest, or the locomotive function; and (4) the power of thinking. With these functions, he posited a hierarchy of psychic powers. He claimed that of the psychic powers mentioned above, some kinds of beings possess all of these, some possess less than all, while others possess only one.
As such, evidently, the plants possess the power of self-nutrition, wherein they can grow up or down and increase or decrease in all directions as long as they can find nutrients in the soil. It is through their own means that they continue to live. Even though the plants possess only one function of the soul, it is a great wonder how they continuously subsist on their own. Next is the power of sensation, which is possessed by all animals. All animals possess the power of sensation because they all have the primary form of sense, which is touch. Aristotle defended and further elaborated this notion in De Anima. To wit: if any order of living things has the sensory, it must also have the appetitive; for appetite is the genus of which desire, passion, and wish are the species; now all animals have one sense at least, viz. touch, and whatever has a sense has the capacity for pleasure and pain and therefore has pleasant and painful objects present to it, and wherever these are present, there is desire, for desire is just appetition of what is pleasant (Book II, 414b). From the arguments stated above, it can be evidently inferred not just how Aristotle proved that all animals possess at least one sense, touch, but also how he scientifically deduced that all animals, by virtue of their sensory function, possess the appetitive function too. From all these animals, there are some which possess the power of locomotion, advancing them to a higher stratum. These are animals which can execute any kind of movement together with the capacity to halt such movement. Lastly, the human beings possess all of the above-mentioned functions, placing them at the top of the hierarchy. They possess the power of thinking, which is the essential feature of the human beings and which sets them apart from all other species. Analyzing the theoretical framework Aristotle subscribed to, it can be construed then that for him every being has a soul.
This is evidently manifested in his attempt to prove the groundings of his epistemology, extending his claim to the psychic hierarchy wherein he posited that every kind of living thing, any entity for that matter, possesses certain functions of the soul. It should be kept in mind, however, that even though Aristotle posited the different functions of the soul, they are, in essence, inseparable. An example of this is the function of nutrition (by eating), which human beings in particular perform in order to think properly and clearly, the latter being also a function of the soul. Evidently, every function of the soul is interconnected with the others, especially in the case of Homo sapiens, who possess all the enumerated functions of the soul. Aristotle's notion of intellect can be rooted in his conception of knowledge, in his epistemology. It is from his conception of knowledge that his other assertions on how he views the world arise. It is commonsensical then to claim that his conception of the mind, or of any other things transcending their spatio-temporal existence, his metaphysics, is grounded on his epistemology. As such, it is of utmost importance to first answer how Aristotle regards the nature of knowledge and how one is able to acquire knowledge, so as to provide an answer on his notion of intellect. Knowledge for him can only be found within the material world, that is, in things which are intelligible by the senses. It is then through our experience with these objects in their spatio-temporal existence that we come to know them. He mentioned the processes of how we can come to know these objects: by perception, discrimination, and thinking. By perception here, I mean the process of how our senses operate to recognize things in the material world. Discrimination then comes simultaneously with perception in order to give a concrete description of the thing being perceived.
For example, upon the perception of a certain plant, we are able to distinguish its structure and other ontical features as the mind starts to categorize. As a corollary, we arrive at the conclusion that what we perceived is indeed a plant. From there, we judge that what we perceived is indeed a plant and hence arrive at the state of thinking. It can be deduced then that through thinking, one is able to comprehend the ontical features of an object and, by virtue of one's reason, its primary essence. By primary essence, I mean the telos, or the end itself, of a thing. Since reason for Aristotle is innate in human beings, so is intellect. It is because for Aristotle, reason is an essential property of the mind, that is, of the intellect. If that is the case, then reason for Aristotle is relatively tantamount to the intellect. Husserl, on the other hand, regarded the process of intuition as the first level of cognition, wherein objects are grasped in the original through experience. This is also the case when one is cognizing objects of mere representation, which includes but is not limited to pictorial intuitions and any means of symbolic indication. To wit: experiencing is consciousness that intuits something and values it to be actual; experiencing is intrinsically characterized as consciousness of the natural object in question and of it as the original: there is consciousness of the original as being there in person. The same thing can be expressed by saying that objects would be nothing at all for the cognizing subject if they did not appear to him, if he had of them no phenomenon. Here, therefore, phenomenon signifies a certain content that intrinsically inhabits the intuitive consciousness in question and is the substrate for its actuality valuation (Husserl, p. 3). It is only logical to infer that experience plays a vital role in the cognition of a certain object. As such, it is only upon experience that one can theorize and move to a higher level of cognition.
A thing must first be intuited before one can theorize about it. And after theorizing comes the process of reflection. Evidently, both Aristotle and Husserl believed in the value of experience, which the former calls perception and the latter intuition. From these processes arise higher forms of cognition, wherein the end result for Aristotle is thinking through the use of reason, while for Husserl it is pure reflection as a result of phenomenology. It is then of utmost importance to first clarify what Husserl means by intellect and Ego, and in what process a person uses his intellect. Furthermore, what is the difference between reflection and pure reflection, and between the empirical Ego and the transcendental Ego? Also, one should answer the questions "what is phenomenology?" and "why is it only through this process that one can arrive at pure reflection?" For Husserl, intellect is identical with consciousness, as Ego is identical to Self. As such, when one speaks of intellect, one is referring to consciousness, and vice versa. Such is also the case with the Ego and the Self. Reflection is the process wherein one is looking not towards the act of reflection itself but rather in the direction of the objects one is conscious of. As such, one is absorbed in reflecting on how these objects exist rather than asking how they come into being or, essentially, enquiring into their primordial existence. If the consciousness is moving towards this kind of reflection, then the Ego is only in his ontical (empirical) status. Pure reflection, on the other hand, is the process wherein the consciousness is reflecting on his consciousness, that is, on the act of reflection per se. This is the case wherein the Ego transcends his ontical stage by describing the events, i.e., relating, referring, combining, et al., in his consciousness. And this can only be done through the process of phenomenology. What is phenomenology then?
Phenomenology is defined as the science of consciousness (Husserl, p. 5). It is the process of describing things and events themselves in their primordial sense through the use of phenomenological reduction. Phenomenological reduction, then, is the process wherein one suspends one's preconceived notions of things in order to objectively describe objects and events as they appear. It is only through this process that we can arrive at pure reflection, because this is the only method wherein objects and events are described as themselves, without conforming to any established principle or assumption. Evidently, Aristotle's notion of intellect and Husserl's notion of Ego posited the strength of the mind in general, transcending space and time. If that is the case, then the conception of a person is not confined only within the physical realm; that is, he can do things beyond the limit of his physical existence in his journey to unravel the primordial existence of objects and of any discipline for that matter. However, what sets them apart from each other is their notion of how one can really grasp the ontological state of an object or, in the words of Kant, its intentionality. Aristotle believed that one can only know the ontological state of a thing by referring to its primary essence, its telos, as the context clue for grasping the object's primary essence. For Husserl, on the other hand, it is only through the use of the phenomenological method that one can comprehend the ontological state of objects. In Being and Time, Heidegger attempted to know the meaning of a Being, that is, the Dasein, by starting to ask and redefine the fundamental question "What is a Being?" He further continued this method by asking the ontological question of Being: that only a being can know his Being because he is conscious of his Being by his being. His starting point is the fact that a being is a Being-in-the-World. He is a being situated in this world.
As such, it is only he who can know his being by virtue of his ontic-ontological character. If that is the case, then it is only he who can determine his possibilities by virtue of being a spatio-temporal entity. Since no other entity can determine the possibilities of a being conscious of his existence, the Dasein alone can ascertain his existentiell. It can be deduced then that the task of Dasein is to transcend to his existentiell in order to arrive at his ontological status. He can only do this by maximizing his possibilities to know himself through the things which are ready-to-hand, things which can help him to reveal his being to him. It should be kept in mind that this process of knowing the Dasein does not go in hermeneutic circles but rather in a back-and-forth condition. Dasein, as a spatio-temporal entity, has a hard time knowing his being because there is a tendency that he might become too absorbed in his world, or fall. Yet what Heidegger wants to emphasize is that he as a Dasein should not conceive his being a spatio-temporal entity as an encumbrance to his Being. It is because it is only through this world that he can have his possibilities. This separates him from other entities and makes him a Dasein. Evidently, Heidegger's notion of Dasein gives great importance to the relationship of the Being and the world, which is also apparent in Aristotle's notion of intellect and Husserl's notion of Ego. However, what separates the former from the latter is that it focuses on providing an answer on how one can transcend his facticity in order to ontologically know his Being. The latter, on the other hand, focuses on discovering the essence and the ontological existence of the objects in the material world. Transcendental phenomenology is defined in general as the study of essence.
It designates two things: a new kind of descriptive method which made a breakthrough in philosophy at the turn of the century, and an a priori science derived from it; a science which is intended to supply the basic instrument for a rigorously scientific philosophy and, in its consequent application, to make possible a methodical reform of all the sciences (Husserl, p. 15). Essentially, transcendental phenomenology is then a description of phenomena. Husserl then laid down the method to achieve the objective of reforming all the sciences. The first step is the use of the phenomenological epoche, or reduction, or bracketing, wherein one suspends or takes away all one's biases and prejudices in order to "objectively describe" a phenomenon. By doing this, we can arrive at a universal description of a phenomenon. This is followed by the compare-and-contrast method, which one will have to undertake in order to arrive at the pure data of things. It appears then that by suspending one's judgment and undergoing the intersubjectivity test, we can arrive at the "pure data of things". In relation to this, Husserl claims that this method should be followed by all sciences in order to answer for their primordial condition. It is held that the sciences cannot escape their dogmas because they fail to question how they came to be. What they are doing is a mere adaptation of established principles proven in the past to be true. Since these established principles were proven in the past to be true, scientists, or people who work in the sciences, do not make any attempt to further verify the truthfulness of their established principles, that is, how and why it is the case that such principles were held to be true. For indisputably, things cannot just come into being without any rationalization, or scientific explanation for that matter.
The sciences have constructed ready-made answers to all things: their nature, existence, features, et al., grounded on the preconceived notion that the sciences have already provided sufficient answers to the primitiveness of these objects. While the sciences are busy explaining these things [the ready-made answers], they fail to realize that they have not arrived at the Isness of these objects, at how they come into being. However, since the sciences have already deceived the people into believing that, in the past, they provided sufficient answers to the primordial existence of things, it appears that people are seemingly contented and satisfied by what the sciences have achieved. This is what phenomenology wants to deconstruct: it wants to create a paradigm shift by destroying the "tradition" institutionalized by science and overcoming relativism and subjectivism through the use of phenomenological reduction. From these, one can arrive at the pure data of consciousness. It is in this sense that phenomenology becomes transcendental. Phenomenology is different from descriptive psychology because it draws upon pure reflection exclusively, and pure reflection excludes, as such, every type of external experience and therefore precludes any co-positing of objects alien to consciousness (Husserl, p. 7). Descriptive psychology, then, does not depend upon pure reflection exclusively; it needs psychological experiencing, which results in the reflection of the external experience. As such, consciousness itself becomes something transcendent, becomes an event in that spatial world which appears, by virtue of consciousness, to be transcendent (Husserl, p. 7). It can be inferred then that phenomenology focuses solely on consciousness per se, making it the science of consciousness, while descriptive psychology focuses on the consciousness of a being in his psychic experiences.
Transcendental idealism states that everything intuited in space and time, and therefore all objects of any experience possible to us, are nothing but appearances, that is, mere representations which, in the manner in which they are represented, as extended beings or as series of alterations, have no independent existence outside our thoughts (Kant, p. 1). As such, it posits that one cannot have knowledge of the realm beyond the empirical; that is, one cannot experience objects outside space and time. It is because the mind, as Kant argues, having certain constraints [in reference to space and time], can only grasp the noesis of the object but not its noumenon, the object's intentionality. It can be inferred then that transcendental idealism's fundamental assertions lie on two grounds: first, objects in themselves exude intentionality; and secondly, we can never know their intentionality [or noumena] because our mind can only grasp the noesis, or what appears to us. Phenomenology agrees with Kant's first claim that objects indeed have their own intentionality but rejects the second assertion. As such, its emergence as a domain of study in philosophy is grounded on its thrust to prove that the mind can indeed know the noumena of objects. Phenomenology believes that this can be done using eidetic reduction, proving to all that the mind can transcend beyond the physical realm, beyond space and time. Essentially, all the philosophies which were tackled in this paper seek to explain and interpret the world, including the objects within it and the beings living in it, from the primordial existence of things up to the authentication of one's Being.
Tuesday, January 21, 2020
microwave oven :: essays research papers
It is late in the evening and you are "vegging out" in front of the TV. The program you are watching takes a commercial break. The commercial is advertising the most delicious-looking plate of Mexican food you have ever seen. You soon conclude that you have a craving for Mexican food. You realize that it is late and the only restaurant that serves Mexican food this late is Taco Bell (which is all the way across town). So what do you do? Well, I will tell you. You go to your fridge and grab a frozen burrito out of the freezer. Place the burrito on a paper plate and pop it in the microwave. "Cook for one and a half minutes on each side and let stand for a couple of minutes." Voila! Your hunger has been satisfied!

I have set up this scenario for you to show you how much the inventor of the microwave oven is unappreciated. This person is a genius. This invention is extremely convenient, portable, and easy to use.

First, I would like to mention how convenient this item is. Before the microwave, one would have to go through a series of strenuous steps in order to cook a meal. First, you have to preheat the conventional oven (which takes approximately 15-20 minutes). Second, open the inferno door, making sure not to get too close or else you will burn your eyebrows and eyelashes off your face. Next, place the food item onto the racks of the abyss. After that, you have to wait 30-45 minutes until the food has cooked. (This whole time your house is becoming a sweltering netherworld.) You take the food out of the oven and sit down to eat (constantly wiping the sweat from your face). These vigorous steps were brilliantly eliminated by the invention of the microwave oven. This machine causes no heat, no singed facial hair, and, more importantly, takes about one-tenth the time compared to the conventional oven.

Second, I would like to discuss this gadget's portability. The college you have chosen to attend is several hours away from home.
So, without Mom's home-cooked meals you must rely on this appliance. It would be extremely difficult to stuff a conventional oven into your dorm room. Instead, the microwave oven sits compactly in the corner. You can take it anywhere. (Where there is electricity, that is.)
Monday, January 13, 2020
Computer Hardware Essay
I. LECTURE OVERVIEW

Foundation Concepts: Computer Hardware reviews trends and developments in microcomputer, midrange, and mainframe computer systems; basic computer system concepts; and the major types of technologies used in peripheral devices for computer input, output, and storage.

Computer Systems – Major types of computer systems are summarized in Figure 13.2. A computer is a system of information processing components that perform input, processing, output, storage, and control functions. Its hardware components include input and output devices, a central processing unit (CPU), and primary and secondary storage devices. The major functions and hardware in a computer system are summarized in Figure 13.9.

Microcomputer Systems – Microcomputers are used as personal computers, network computers, personal digital assistants, technical workstations, and information appliances. Like most computer systems today, microcomputers are interconnected in a variety of telecommunications networks. This typically includes local area networks, client/server networks, intranets and extranets, and the Internet.

Other Computer Systems – Midrange computers are increasingly used as powerful network servers, and for many multiuser business data processing and scientific applications. Mainframe computers are larger and more powerful than most midsize computers. They are usually faster, have more memory capacity, and can support more network users and peripheral devices. They are designed to handle the information processing needs of large organizations with high volumes of transaction processing, or with complex computational problems. Supercomputers are a special category of extremely powerful mainframe computer systems designed for massive computational assignments.

II. LEARNING OBJECTIVES

• Identify the major types, trends, and uses of microcomputer, midrange and mainframe computer systems.
• Outline the major technologies and uses of computer peripherals for input, output, and storage.
• Identify and give examples of the components and functions of a computer system.
• Identify the computer systems and peripherals you would acquire or recommend for a business of your choice, and explain the reasons for your selections.

III. LECTURE NOTES

Section 1: Computer Systems: End User and Enterprise Computing

INTRODUCTION
All computers are systems of input, processing, output, storage, and control components. Technology is evolving at a rapid pace, and new forms of input, output, processing, and storage devices continue to enter the market.

Analyzing City of Richmond and Tim Beaty Builders
We can learn a lot about innovative business uses of PDAs from this case. Take a few minutes to read it, and we will discuss it (See City of Richmond and Tim Beaty Builders in Section IX).

TYPES OF COMPUTER SYSTEMS – [Figure 13.2]
There are several major categories of computer systems with a variety of characteristics and capabilities. Computer systems are typically classified as:
• Mainframe computers
• Midrange computers
• Microcomputers
These categories attempt to describe the relative computing power provided by different computing platforms or types of computers; therefore, they are not precise classifications. Some experts predict the merging or disappearance of several computer categories. They feel that many midrange and mainframe systems have been made obsolete by the power and versatility of client/server networks of microcomputers and servers. Most recently, some industry experts have predicted that the emergence of network computers and information appliances for applications on the Internet and corporate intranets will replace many personal computers, especially in large organisations and in the home computer market.
MICROCOMPUTER SYSTEMS
Microcomputers are the smallest but most important category of computer systems for business people and consumers. They are also referred to as personal computers (or PCs). The computing power of current microcomputers exceeds that of the mainframe computers of previous generations at a fraction of their cost. They have become powerful networked professional workstations for use by end users in business.

Microcomputers categorised by size: 1. Handheld 2. Notebook 3. Laptop 4. Portable 5. Desktop 6. Floor-standing
Microcomputers categorised by use: 1. Home 2. Personal 3. Professional 4. Workstation 5. Multi-user Systems
Microcomputers categorised by special purpose: 1. Workstation Computers 2. Network Servers 3. Personal Digital Assistants

Workstation Computers – some microcomputers are powerful workstation computers (technical workstations) that support applications with heavy mathematical computing and graphics display demands, such as computer-aided design (CAD) in engineering, or investment and portfolio analysis in the securities industry.

Network Servers – are usually more powerful microcomputers that co-ordinate telecommunications and resource sharing in small local area networks (LANs) and in Internet and intranet websites. This is the fastest growing microcomputer application category.

Network Computers:
• Network Computers (NCs) are a major new microcomputer category designed primarily for use with the Internet and corporate intranets by clerical workers, operational employees, and knowledge workers with specialised or limited computing applications. In between NCs and full-featured PCs are stripped-down PCs known as NetPCs or legacy-free PCs. NetPCs are designed for the Internet and a limited range of applications within a company. Examples are Dell's WebPC, Compaq's iPaq, HP's e-PC, and eMachines' eOne. Network computers (also called thin clients) are low-cost, sealed, networked microcomputers with no or minimal disk storage.
Users of network computers depend primarily on Internet and intranet servers for their operating system and web browser, Java-enabled application software, and data access and storage.

The main attractions of network computers over full-featured PCs are their low costs to:
• Purchase
• Upgrade
• Maintain
• Support
Other benefits to businesses include:
• Ease of software distribution and licensing
• Computing platform standardisation
• Reduced end user support requirements
• Improved manageability through centralised management and enterprisewide control of computer network resources

Information Appliances
The market offers a number of gadgets and information appliances that give users the capability to perform a host of basic computational chores. Examples of some information appliances include:
• Personal Digital Assistants (PDAs) – designed for convenient mobile communications and computing. PDAs use touch screens, pen-based handwriting recognition, or keyboards to help mobile workers send and receive E-mail, access the Web, and exchange information such as appointments, to-do lists, and sales contacts with their desktop PCs or web servers.
• Set-top boxes and video-game consoles that connect to home TV sets. These devices enable you to surf the Web or send and receive E-mail while watching TV programs or playing video games.
• Wireless PDAs, cellular and PCS phones, and wired telephone-based appliances that can send and receive E-mail and access the Web.

Computer Terminals
Computer terminals are undergoing a major conversion to networked computer devices. For example:
• Dumb terminals, which are keyboard/video monitor devices with limited processing capabilities, are giving way to intelligent terminals, which are modified networked PCs, network computers, or other microcomputer-powered network devices. Intelligent terminals can perform data entry and some information processing tasks independently.
• Networked terminals, which may be Windows terminals that depend on network servers for Windows software, processing power, and storage, or Internet terminals, which depend on Internet or intranet website servers for their operating systems and application software.
• Transaction terminals are a form of intelligent terminal. Uses can be found in banks, retail stores, factories, and other work sites. Examples are ATMs, factory production recorders, and POS terminals.

MIDRANGE COMPUTER SYSTEMS
Midrange computers, including minicomputers and high-end network servers, are multi-user systems that can manage networks of PCs and terminals. Characteristics of midrange computers include:
• Generally, midrange computers are general-purpose computers that are larger and more powerful than most microcomputers but smaller and less powerful than most large mainframes.
• Cost less to buy, operate, and maintain than mainframe computers.
• Have become popular as powerful network servers to help manage large Internet websites, corporate intranets and extranets, and client/server networks.
• Electronic commerce and other business uses of the Internet are popular high-end server applications, as are integrated enterprisewide manufacturing, distribution, and financial applications.
• Data warehouse management, data mining, and online analytical processing are contributing to the growth of high-end servers and other midrange systems.
• First became popular as minicomputers for scientific research, instrumentation systems, engineering analysis, and industrial process monitoring and control. Minicomputers could easily handle such uses because these applications are narrow in scope and do not demand the processing versatility of mainframe systems.
• Serve as industrial process-control and manufacturing plant computers, and play a major role in computer-aided manufacturing (CAM).
• Take the form of powerful technical workstations for computer-aided design (CAD) and other computation- and graphics-intensive applications.
• Are used as front-end computers to assist mainframe computers in telecommunications processing and network management.
• Can function in ordinary operating environments (they do not need special air conditioning or electrical wiring).
• Smaller models of minicomputers do not need a staff of specialists to operate them.

MIDRANGE COMPUTER APPLICATIONS
• Serve as industrial process-control and manufacturing plant computers, and play a major role in computer-aided manufacturing (CAM).
• Serve as powerful technical workstations for computer-aided design (CAD) and other computation- and graphics-intensive applications.
• Serve as front-end computers to assist mainframe computers in telecommunications processing and network management.

Midrange Computers as Network Servers:
• Electronic commerce and other business uses of the Internet are popular high-end server applications, as are integrated enterprisewide manufacturing, distribution, and financial applications.
• Other applications, like data warehouse management, data mining, and online analytical processing, are contributing to the growth of high-end servers and other midrange systems.
• Serve as powerful network servers to help manage large Internet websites, corporate intranets and extranets, and client/server networks.

MAINFRAME COMPUTER SYSTEMS
Mainframe computers are large, fast, and powerful computer systems. Characteristics of mainframe computers include:
• They are physically larger and more powerful than micros and minis.
• Can process hundreds of millions of instructions per second (MIPS).
• Have large primary storage capacities. Main memory capacity can range from hundreds of megabytes to many gigabytes of primary storage.
• Mainframes have slimmed down drastically in the last few years, dramatically reducing air-conditioning needs, electrical power consumption, and floor space requirements, and thus their acquisition and operating costs.
• Sales of mainframes have increased due to cost reductions and the increase in applications such as data mining and warehousing, decision support, and electronic commerce.

Mainframe Computer Applications:
• Handle the information processing needs of major corporations and government agencies with many employees and customers.
• Handle enormous and complex computational problems.
• Used in organisations processing great volumes of transactions.
• Handle the great volumes of complex calculations involved in scientific and engineering analyses and simulations of complex design projects.
• Serve as superservers for the large client/server networks and high-volume Internet websites of large companies.
• Are becoming a popular business-computing platform for data mining and warehousing, and for electronic commerce applications.

Supercomputer Systems:
The term supercomputer describes a category of extremely powerful computer systems specifically designed for scientific, engineering, and business applications requiring extremely high speeds for massive numeric computations.

Supercomputer Applications:
• Used by government research agencies, large universities, and major corporations.
• Are used for applications such as global weather forecasting, military defence systems, computational cosmology and astronomy, microprocessor research and design, large-scale data mining, large time-sharing networks, and so on.
• Use parallel processing architectures of interconnected microprocessors (which can execute many instructions at the same time in parallel).
• Can perform arithmetic calculations at speeds of billions of floating-point operations per second (gigaflops).
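The gigaflop and teraflop ratings above translate directly into running time for a fixed workload. A minimal sketch in Python; the workload size and machine speeds here are made-up illustrative figures, not benchmarks of any real system:

```python
# How long a fixed computational workload takes at different flop rates.
# All numbers below are illustrative assumptions.

def runtime_seconds(operations, flops):
    """Time to finish `operations` floating-point operations at `flops` ops/sec."""
    return operations / flops

workload = 1e15           # one quadrillion floating-point operations (assumed)
gigaflop_machine = 5e9    # a 5-gigaflop machine
teraflop_machine = 1e12   # a 1-teraflop machine

print(runtime_seconds(workload, gigaflop_machine))  # 200000.0 seconds (~2.3 days)
print(runtime_seconds(workload, teraflop_machine))  # 1000.0 seconds (~17 minutes)
```

The same workload that occupies a gigaflop-class machine for days finishes in minutes at teraflop speeds, which is why massive computational assignments are reserved for this class of system.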
Teraflop (1 trillion floating-point operations per second) supercomputers, which use advanced massively parallel processing (MPP) designs of thousands of interconnected microprocessors, are becoming available.
• Purchase prices for large supercomputers are in the $5 million to $50 million range.

Mini-supercomputers: The use of symmetric multiprocessing (SMP) and distributed shared memory (DSM) designs of smaller numbers of interconnected microprocessors has spawned a breed of mini-supercomputer with prices that start in the hundreds of thousands of dollars.

TECHNICAL NOTE: THE COMPUTER SYSTEM CONCEPTS – [Figure 13.9]
As a business professional, you do not need a detailed technical knowledge of computers. However, you do need to understand some basic facts and concepts about computer systems. This should help you be an informed and productive user of computer system resources.

A computer is a system: an interrelated combination of components that perform the basic system functions of input, processing, output, storage, and control, thus providing end users with a powerful information-processing tool. Understanding the computer as a system is vital to the effective use and management of computers. A computer is a system of hardware devices organised according to the following system functions:

• Input. Examples of input devices of a computer system include: 1. Keyboards 2. Touch Screens 3. Light Pens 4. Electronic Mice 5. Optical Scanners 6. Voice Input. They convert data into electronic machine-readable form for direct entry, or through a telecommunications network, into a computer system.

• Processing. The central processing unit (CPU) is the main processing component of a computer system. (In microcomputers, it is the main microprocessor.) One of the CPU's major components is the arithmetic-logic unit (ALU), which performs the arithmetic and logic functions required in computer processing. Components of the CPU include: 1. Control Unit 2.
Arithmetic-Logic Unit 3. Primary Storage Unit

• Output. Output devices convert electronic information produced by the computer system into human-intelligible form for presentation to end users. Examples of output devices include: 1. Video Display Units 2. Audio Response Units 3. Printers

• Storage. The storage function of a computer system is used to store data and program instructions needed for processing. Storage devices include: 1. Primary Storage Unit (main memory) 2. Secondary Storage Devices (magnetic disk and tape units, optical disks)

• Control. The control unit of a CPU interprets computer program instructions and transmits directions to the other components of the computer system.

Computer Processing Speeds:
Operating speeds of computers are measured in a number of ways. For example:
• Milliseconds – thousandths of a second
• Microseconds – millionths of a second
• Nanoseconds – billionths of a second
• Picoseconds – trillionths of a second
Other terminology used includes:
• Teraflop – used by some supercomputers
• MIPS – millions of instructions per second
• Megahertz (MHz) – millions of cycles per second
• Gigahertz (GHz) – billions of cycles per second
• Clock Speed – used to rate microprocessors by the speed of their timing circuits and internal clock.

Section II: Computer Peripherals: Input, Output, and Storage Technologies

INTRODUCTION
A computer is just a high-powered "processing box" without peripherals. Your particular computing needs will dictate the peripheral components you choose.

Analyzing United Technologies and Eastman Kodak
We can learn a lot about the business value of consolidating computer operations and systems from this case. Take a few minutes to read it, and we will discuss it (See United Technologies and Eastman Kodak in Section IX).

PERIPHERALS
Peripherals are the generic name for all input, output, and secondary storage devices that are part of a computer system.
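The frequency and time units listed under Computer Processing Speeds above are reciprocals of one another: a clock rated in megahertz or gigahertz completes one cycle in a corresponding number of nanoseconds. A minimal sketch, with illustrative clock rates:

```python
# Cycle time is the reciprocal of clock frequency.
# The specific clock rates chosen below are illustrative assumptions.

def cycle_time_ns(hertz):
    """Duration of one clock cycle, in nanoseconds."""
    return 1e9 / hertz

MHZ = 1e6  # megahertz: millions of cycles per second
GHZ = 1e9  # gigahertz: billions of cycles per second

print(cycle_time_ns(500 * MHZ))  # 2.0 ns per cycle at 500 MHz
print(cycle_time_ns(1 * GHZ))    # 1.0 ns per cycle at 1 GHz
```

This is why nanoseconds, not milliseconds, are the natural unit for describing processor timing circuits.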
Peripherals depend on direct connections or telecommunications links to the central processing unit of a computer system. Thus, all peripherals are online devices; that is, they are separate from, but can be electronically connected to and controlled by, a CPU. This is the opposite of off-line devices, which are separate from and not under the control of the CPU.

INPUT TECHNOLOGY
There has been a major trend toward the increased use of input technologies that provide a more natural user interface for computer users. More and more data and commands are being entered directly and easily into computer systems through pointing devices like electronic mice and touch pads, and through technologies like optical scanning, handwriting recognition, and voice recognition.

POINTING DEVICES
Keyboards are still the most widely used devices for entering data and text into computer systems. However, pointing devices are a better alternative for issuing commands, making choices, and responding to prompts displayed on your video screen. They work with your operating system's graphical user interface (GUI), which presents you with icons, menus, windows, buttons, bars, and so on, for your selection. Examples of pointing devices include:
• Electronic Mouse – a device used to move the cursor on the screen, as well as to issue commands and make icon and menu selections.
• Trackball – a device used to move the cursor on the display screen.
• Pointing Stick – a small button-like device, sometimes likened to the eraser head of a pencil. The cursor moves in the direction of the pressure you place on the stick.
• Touchpad – a small rectangular touch-sensitive surface, usually placed below the keyboard. The cursor moves in the direction your finger moves on the pad.
• Touch Screen – a device that accepts data input by the placement of a finger on or close to the CRT screen.

PEN-BASED COMPUTING
Pen-based computing technologies are being used in many hand-held computers and personal digital assistants.
These small PCs and PDAs contain fast processors and software that recognises and digitises handwriting, hand printing, and hand drawing. They have a pressure-sensitive layer, like a graphics pad, under their slate-like liquid crystal display (LCD) screen. A variety of pen-like devices are available:
• Digitizer Pen – a photoelectronic device that can be used as a pointing device, or to draw or write on the pressure-sensitive surface of a graphics tablet.
• Graphics Tablet – a device that allows an end user to draw or write on a pressure-sensitive tablet and have the handwriting or graphics digitised by the computer and accepted as input.

SPEECH RECOGNITION SYSTEMS
Speech recognition and voice response (still in their infancy) promise to be the easiest method of data entry, word processing, and conversational computing, since speech is the easiest, most natural means of human communication. Speech recognition systems analyse and classify speech or vocal tract patterns and convert them into digital codes for entry into a computer system. Early voice recognition products used discrete speech recognition, where you had to pause between each spoken word. New continuous speech recognition (CSR) software recognises controlled, conversationally paced speech.
Examples of continuous speech recognition software include:
• NaturallySpeaking by Dragon Systems
• ViaVoice by IBM
• VoiceXpress by Lernout & Hauspie
• FreeSpeech by Philips
Areas where speech recognition systems are used include:
• Manufacturers use it for inspection, inventory, and quality control
• Airlines and parcel delivery companies use it for voice-directed sorting of baggage and parcels
• Voice-activated GPS systems are being used in advanced car design
• Physicians use it to enter and print out prescriptions
• Gemmologists use it to free up their hands when inspecting and grading precious stones
• Handicapped individuals use voice-enabled software to operate their computers, send e-mail, and surf the World Wide Web
Speaker-independent voice recognition systems allow a computer to understand a few words from a voice it has never heard before. They enable computers to respond to verbal and touch-tone input over the telephone. Examples include:
• Computerized telephone call switching
• Telemarketing surveys
• Bank pay-by-phone bill-paying services
• Stock quotation services
• University registration systems
• Customer credit and account balance inquiries

OPTICAL SCANNING
Optical scanning devices read text or graphics and convert them into digital input for a computer. Optical scanning enables the direct entry of data from source documents into a computer system. Popular uses of optical scanning include:
• Scanning pages of text and graphics into your computer for desktop publishing and web publishing applications.
• Scanning documents into your system and organizing them into folders as part of a document management library system for easy reference or retrieval.
There are many types of optical scanners, but they all employ photoelectric devices to scan the characters being read. Reflected light patterns of the data are converted into electronic impulses that are then accepted as input into the computer system.
Optical scanning technology known as optical character recognition (OCR) can read special-purpose characters and codes. OCR scanners are used to read characters and codes on merchandise tags, product labels, credit card receipts, utility bills, insurance premiums, and airline tickets, and to sort mail, score tests, and process business and government forms. Devices such as handheld optical scanning wands are used to read OCR coding on merchandise tags and other media. Many business applications involve reading bar codes, which use patterns of bars to represent characters. One common example is the Universal Product Code (UPC) bar coding that you see on packages of food items and many other products.

OTHER INPUT TECHNOLOGIES
Magnetic stripe technology is a familiar form of data entry that helps computers read credit cards. The dark magnetic stripe on the back of such cards carries the same iron oxide coating as magnetic tape. Smart cards, which embed a microprocessor chip and several kilobytes of memory into debit, credit, and other cards, are popular in Europe and becoming available in the United States. Digital cameras and digital video cameras enable you to shoot, store, and download still photos or full-motion video with audio into your PC. Magnetic ink character recognition (MICR) is machine recognition of characters printed with magnetic ink; it is primarily used for check processing by the banking industry.

OUTPUT TECHNOLOGIES
Computers provide information in a variety of forms. Video displays and printed documents have been, and still are, the most common forms of output from computer systems. But other natural and attractive output technologies, such as voice response systems and multimedia output, are increasingly found along with video displays in business applications.

VIDEO OUTPUT
Video displays are the most common type of computer output. Most desktop computers rely on video monitors that use cathode ray tube (CRT) technology.
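The UPC bar codes mentioned under OCR above carry a built-in check digit that lets a scanner detect misreads: digits in the odd positions count three times, digits in the even positions count once, and the final digit makes the total a multiple of 10. A minimal sketch of the standard UPC-A check:

```python
# Validate a 12-digit UPC-A code using its check digit.

def upc_a_is_valid(code):
    """Return True if `code` is a 12-digit string with a correct check digit."""
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Odd positions (1st, 3rd, ..., 11th) weigh 3; even positions weigh 1.
    total = sum(d * 3 for d in digits[0:11:2]) + sum(digits[1:11:2])
    check = (10 - total % 10) % 10
    return check == digits[11]

print(upc_a_is_valid("036000291452"))  # True  (a well-known example UPC)
print(upc_a_is_valid("036000291453"))  # False (corrupted check digit)
```

A single misread digit changes the weighted total, so the scanner can reject the read and prompt a rescan rather than ring up the wrong item.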
Usually, the clarity of the video display depends on the type of video monitor you use and the graphics circuit board installed in your computer. A high-resolution, flicker-free monitor is especially important if you spend a lot of time viewing multimedia on CDs or the Web, or the complex graphical displays of many software packages. The biggest use of liquid crystal displays (LCDs) is to provide a visual display capability for portable microcomputers and PDAs. LCD displays need significantly less electric current and provide a thin, flat display. Advances in technology such as active matrix and dual scan capabilities have improved the colour and clarity of LCD displays.

PRINTED OUTPUT
After video displays, printed output is the most common form of output. Most personal computer systems rely on inkjet or laser printers to produce permanent (hard copy) output in high-quality printed form. Printed output is still a common form of business communications and is frequently required for legal documentation.
• Inkjet Printers – spray ink onto a page one line at a time. They are popular, low-cost printers for microcomputer systems. They are quiet, produce several pages per minute of high-quality output, and can print both black-and-white and high-quality colour graphics.
• Laser Printers – use an electrostatic process similar to a photocopying machine to produce many pages per minute of high-quality black-and-white output. More expensive colour laser printers and multifunction inkjet and laser models that print, fax, scan, and copy are other popular choices for business offices.

STORAGE TRADE-OFFS
Data and information need to be stored after input, during processing, and before output. Computer-based information systems rely primarily on the memory circuits and secondary storage devices of computer systems to accomplish the storage function.
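The speed/capacity/cost trade-off behind the storage function can be made concrete by ranking the storage hierarchy by access time. The figures below are rough order-of-magnitude illustrations for the era's technology, not specifications of any product:

```python
# The storage hierarchy: faster media cost more per byte and sit nearer the
# processor; slower, cheaper media hold more. Access times are illustrative
# order-of-magnitude assumptions only.

media = [
    # (medium, rough access time in seconds, relative cost per byte)
    ("semiconductor RAM",  100e-9, "highest"),
    ("magnetic hard disk",  10e-3, "lower"),
    ("optical disk",       100e-3, "lower still"),
    ("magnetic tape",        10.0, "lowest"),
]

for name, access, cost in sorted(media, key=lambda m: m[1]):
    print(f"{name:20s} ~{access:g} s access, {cost} cost per byte")
```

Sorting by access time reproduces the hierarchy described in the notes: semiconductor memory for primary storage at the fast, expensive end, and magnetic tape for archival storage at the slow, cheap end.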
Major trends in primary and secondary storage methods:
• Progress in very-large-scale integration (VLSI), which packs millions of memory circuit elements on tiny semiconductor memory chips, is responsible for continuing increases in the main-memory capacity of computers.
• Secondary storage capacities are also expected to escalate into the billions and trillions of characters, due primarily to the use of optical media.

Storage Trade-offs: Speed, capacity, and cost relationships.
• Note the cost/speed/capacity trade-offs as one moves from semiconductor memories to magnetic media, such as magnetic disks and tapes, to optical disks.
• High-speed storage media cost more per byte and provide lower capacities.
• Large-capacity storage media cost less per byte but are slower.
• Semiconductor memories are used mainly for primary storage, though they are sometimes used as high-speed secondary storage devices.
• Magnetic disk and tape and optical disk devices are used as secondary storage devices to greatly enlarge the storage capacity of computer systems.
• Most primary storage circuits use RAM (random access memory) chips, which lose their contents when electrical power is interrupted.
• Secondary storage devices provide a more permanent type of storage media for storage of data and programs.

Computer Storage Fundamentals: [Figure 13.20]
Data are processed and stored in a computer system through the presence or absence of electronic or magnetic signals in the computer's circuitry or in the media it uses. This is called a "two-state" or binary representation of data, since the computer and media can exhibit only two possible states or conditions – ON (1) or OFF (0).

Computer storage elements:
• Bit – the smallest element of data (short for binary digit), which can have a value of zero or one. The capacity of memory chips is usually expressed in terms of bits.
• Byte – the basic grouping of bits that the computer operates on as a single unit.
It typically consists of 8 bits and is used to represent one character of data in most computer coding schemes (e.g., 8 bits = 1 byte). The capacity of a computer's memory and secondary storage devices is usually expressed in terms of bytes. Common coding schemes are ASCII (American Standard Code for Information Interchange) and EBCDIC (Extended Binary Coded Decimal Interchange Code, pronounced "EB-see-dick").

Storage capacities are frequently measured in:
• Kilobyte = 1,000 bytes
• Megabyte = 1,000,000 bytes
• Gigabyte = 1,000,000,000 bytes
• Terabyte = 1,000,000,000,000 bytes
• Petabyte = 1,000,000,000,000,000 bytes
• Exabyte = 1,000,000,000,000,000,000 bytes
• Zettabyte = 1,000,000,000,000,000,000,000 bytes
• Yottabyte = 1,000,000,000,000,000,000,000,000 bytes

Direct and Sequential Access
• Direct Access – primary storage media such as semiconductor memory chips are called direct access or random access memories (RAM). Magnetic disk devices are frequently called direct access storage devices (DASDs). The terms direct access and random access describe the same concept: an element of data or instructions can be directly stored and retrieved by selecting and using any of the locations on the storage media. They also mean that each storage position (1) has a unique address and (2) can be individually accessed in approximately the same length of time without having to search through other storage positions.
• Sequential Access – sequential access storage media such as magnetic tape do not have unique storage addresses that can be directly addressed. Instead, data must be stored and retrieved using a sequential or serial process. Data are recorded one after another in a predetermined sequence on a storage medium. Locating an individual item of data requires searching much of the recorded data on the tape until the desired item is located.

SEMICONDUCTOR MEMORY
The primary storage (main memory) on most modern computers consists of microelectronic semiconductor memory circuits.
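The direct vs. sequential access distinction above can be sketched in code: a dictionary models a direct-access device, where any unique address is reachable in one step, while scanning a list in order models a tape, where records must be read one after another until the target appears. The account records here are made-up illustrative data:

```python
# Direct vs. sequential access, modeled with ordinary Python containers.
# The account keys and values are illustrative assumptions.

records = [("acct-%04d" % i, i * 10) for i in range(1000)]

# Direct access: an index maps each unique address straight to its data.
index = dict(records)
print(index["acct-0742"])  # 7420 -- one lookup, regardless of position

# Sequential access: read record after record until the key matches.
def sequential_find(tape, key):
    steps = 0
    for k, v in tape:
        steps += 1
        if k == key:
            return v, steps
    return None, steps

value, steps = sequential_find(records, "acct-0742")
print(value, steps)  # 7420 743 -- had to pass 743 records to reach it
```

The dictionary lookup takes roughly the same time for any key, mirroring the "approximately the same length of time" property of direct access, while the sequential search cost grows with the record's position on the tape.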
Plug-in memory circuit boards containing 32 megabytes or more of memory chips can be added to your PC to increase its memory capacity. Specialized memory can help improve your computer's performance. Examples include:
• External cache memory of 512 kilobytes to help your microprocessor work faster
• Video graphics accelerator cards with 16 megabytes of RAM for faster and clearer video performance
• Removable credit-card-size and smaller "flash memory" RAM cards, which provide several megabytes of erasable direct access storage for PDAs or hand-held PCs
Some of the major attractions of semiconductor memory are:
• Small size
• Fast speed
• Shock and temperature resistance
One major disadvantage of most semiconductor memory is:
• Volatility – uninterrupted electric power must be supplied or the contents of memory will be lost (except with read-only memory, which is permanent).
There are two basic types of semiconductor memory:
• Random Access Memory (RAM) – these memory chips are the most widely used primary storage medium. Each memory position can be both read and written, so it is also called read/write memory. This is a volatile memory.
• Read-Only Memory (ROM) – non-volatile memory chips used for permanent storage. ROM can be read but not erased or overwritten. Instructions and programs in primary storage can be permanently "burned in" to the storage cells during manufacturing. This permanent software is also called firmware. Variations include PROM (programmable read-only memory) and EPROM (erasable programmable read-only memory), which can be permanently or temporarily programmed after manufacture.

MAGNETIC DISK STORAGE
Magnetic disks are the most common form of secondary storage for modern computer systems, because they provide fast access and high storage capacities at a reasonable cost. Characteristics of magnetic disks:
• Disk drives contain metal disks that are coated on both sides with an iron oxide recording material.
• Several disks are mounted together on a vertical shaft, which typically rotates the disks at speeds of 3,600 to 7,600 revolutions per minute (rpm).
• Access arms between the slightly separated disks position electromagnetic read/write heads to read and write data on concentric, circular tracks.
• Data are recorded on tracks in the form of tiny magnetized spots to form the binary digits of common computer codes.
• Thousands of bytes can be recorded on each track, and there are several hundred data tracks on each disk surface, which provides you with billions of storage positions for software and data.

Types of Magnetic Disks
There are several types of magnetic disk arrangements, including removable disk cartridges as well as fixed disk units. Removable disk devices are popular because they are transportable and can be used to store backup copies of your data off-line for convenience and security.
• Floppy Disks, or magnetic diskettes, consist of polyester film disks covered with an iron oxide compound. A single disk is mounted and rotates freely inside a protective flexible or hard plastic jacket, which has access openings to accommodate the read/write head of a disk drive unit. The 3½-inch floppy disk, with a capacity of 1.44 megabytes, is the most widely used version, with the newer Superdisk technology offering 120 megabytes of storage.
• Hard Disk Drives combine magnetic disks, access arms, and read/write heads into a sealed module. This allows higher speeds, greater data-recording densities, and closer tolerances within a sealed, more stable environment. Fixed or removable disk cartridge versions are available. Capacities of hard drives range from several hundred megabytes to many gigabytes of storage.

RAID Storage
Disk arrays of interconnected microcomputer hard disk drives have replaced large-capacity mainframe disk drives to provide many gigabytes of online storage.
Known as RAID (redundant arrays of independent disks), they combine from 6 to more than 100 small hard disk drives and their control microprocessors into a single unit. Advantages of RAID disks include:
• Large capacities with high access speeds, since data are accessed in parallel over multiple paths from many disks
• Fault tolerance, since their redundant design offers multiple copies of data on several disks; if one disk fails, data can be recovered from backup copies automatically stored on other disks
• Storage area networks (SANs) are high-speed fibre channel local area networks that can interconnect many RAID units and share their combined capacity through network servers for many users

MAGNETIC TAPE STORAGE

Magnetic tape is still used as a secondary storage medium in business applications. The read/write heads of magnetic tape drives record data in the form of magnetised spots on the iron oxide coating of the plastic tape. Magnetic tape devices include tape reels and cartridges in mainframes and midrange systems, and small cassettes or cartridges for PCs. These devices serve as slower but lower-cost storage to supplement magnetic disks for massive data warehouse and other business storage requirements. Other major applications of magnetic tape include long-term archival storage and backup storage for PCs and other systems.

OPTICAL DISK STORAGE

Optical disk storage is based on using a laser to read tiny spots on a plastic disk. The disks are currently capable of storing billions of characters of information.
• CD-ROM – a common type of optical disk used on microcomputers for read-only storage. Capacity is over 600 megabytes per disk, equivalent to over 400 1.44-megabyte floppy disks or 300,000 double-spaced pages of text. Data are recorded as microscopic pits in a spiral track and are read using a laser device.
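The fault-tolerant recovery described above can be illustrated with the byte-wise XOR parity scheme used by some RAID levels (such as RAID 5). This is a simplified sketch of the redundancy idea, not an example from the text: real arrays stripe data and distribute parity across drives.

```python
from functools import reduce

# Simplified RAID-style parity: the parity block is the byte-wise XOR
# of all data blocks, so any single lost block can be rebuilt.

def parity(blocks):
    """Byte-wise XOR across a list of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_group)
                 for byte_group in zip(*blocks))

def recover(surviving_blocks, parity_block):
    """XOR-ing the survivors with the parity block rebuilds the lost one."""
    return parity(surviving_blocks + [parity_block])

disks = [b"AAAA", b"BBBB", b"CCCC"]   # data striped across three disks
p = parity(disks)                      # stored on a fourth disk

lost = disks.pop(1)                    # simulate disk 1 failing
rebuilt = recover(disks, p)
print(rebuilt == lost)                 # True: the failed disk is rebuilt
```

Because XOR is its own inverse, the same `parity` function both generates the redundancy and recovers from a single-disk failure.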
Limitation: recorded data cannot be erased.
• CD-R (compact disk recordable) is another optical disk technology. It enables computers with CD-R disk drive units to record their own data once on a CD, and then read the data indefinitely. Limitation: recorded data cannot be erased.
• CD-RW (CD-rewritable) optical disk systems have now become available which record and erase data by using a laser to heat a microscopic point on the disk's surface. In CD-RW versions using magneto-optical technology, a magnetic coil changes the spot's reflective properties from one direction to another, thus recording a binary one or zero. A laser device can then read the binary codes on the disk by sensing the direction of reflected light.
• DVD (digital video disk or digital versatile disk) can hold from 3.0 to 8.5 gigabytes of multimedia data on each side of a compact disk. The large capacities and high-quality images and sound of DVD technology are expected to eventually replace CD-ROM and CD-RW technologies for data storage, and promise to accelerate the use of DVD drives for multimedia products that can be used in both computers and home entertainment systems.
• DVD-ROM is beginning to replace magnetic tape videocassettes for movies and other multimedia products.
• DVD-RAM is being used for backup and archival storage of data and multimedia files.

Business Applications

One of the major uses of optical disks in mainframe and midrange systems is in image processing, where long-term archival storage of historical files of document images must be maintained. Mainframe and midrange computer versions of optical disks use 12-inch plastic disks with capacities of several gigabytes, with up to 20 disks held in jukebox drive units. WORM (write once, read many) versions of optical disks are used to store data on the disk. Although data can only be stored once, it can be read an unlimited number of times.
One of the major business uses of CD-ROM disks for personal computers is to provide a publishing medium for fast access to reference materials in a convenient, compact form. These include:
• Catalogs
• Directories
• Manuals
• Periodical abstracts
• Part listings
• Statistical databases of business and economic activity
Another major use is interactive multimedia applications in business, education, and entertainment using CD-ROM and DVD disks.

Optical disks have become a popular storage medium for image processing and multimedia business applications, and they appear to be a promising alternative to magnetic disks and tape for very large mass storage capabilities for enterprise computing systems. However, rewritable optical technologies are still being perfected. Also, most optical disk devices are significantly slower and more expensive (per byte of storage) than magnetic disk devices, so optical disk systems are not expected to displace magnetic disk technology in the near future for most business applications.

IV. KEY TERMS AND CONCEPTS – DEFINED

Binary Representation: Pertaining to the presence or absence of electronic or magnetic "signals" in the computer's circuitry or in the media it uses. There are only two possible states or conditions – presence or absence.

Central Processing Unit (CPU): The unit of a computer system that includes the circuits that control the interpretation and execution of instructions. In many computer systems, the CPU includes the arithmetic-logic unit, the control unit, and the primary storage unit.

Computer System: Computer hardware as a system of input, processing, output, storage, and control components. Thus a computer system consists of input and output devices, primary and secondary storage devices, the central processing unit, the control unit within the CPU, and other peripheral devices.

Computer Terminal: Any input/output device connected by telecommunications links to a computer.
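The two-state idea behind binary representation can be demonstrated with a short example (illustrative, not from the text): a single character is stored as eight two-state positions, each either present (1) or absent (0).

```python
# Binary representation: one character stored as eight two-state bits.

ch = "A"
bits = format(ord(ch), "08b")   # ASCII code 65 as eight binary digits
print(bits)                     # 01000001

# Reversing the mapping recovers the character from its bit pattern:
print(chr(int(bits, 2)))        # A
```

Every byte in the storage devices discussed above is ultimately just such a group of presence/absence states.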
Digital Cameras: Digital still cameras and digital video cameras enable you to shoot, store, and download still photos or full-motion video with audio into your PC.

Direct Access: A method of storage where each storage position has a unique address and can be individually accessed in approximately the same period of time without having to search through other storage positions.

Information Appliance: Devices for consumers to access the Internet.

Laptop Computer: A small portable PC.

Liquid Crystal Displays (LCD): Electronic visual displays that form characters by applying an electrical charge to selected silicon crystals.

Magnetic Disk Storage: Data storage technology that uses magnetised spots on metal or plastic disks.

Magnetic Disk Storage – Floppy Disk: A small, flexible, phonograph-record-like disk enclosed in a protective envelope. It is a widely used form of magnetic disk media that provides a direct access storage capability for microcomputer systems.

Magnetic Disk Storage – Hard Disk: Secondary storage medium; generally nonremovable disks made out of metal and covered with a magnetic recording surface. It holds data in the form of magnetised spots.

Magnetic Disk Storage – RAID: Redundant array of independent disks. Magnetic disk units that house many interconnected microcomputer hard disk drives, thus providing large, fault tolerant storage capacities.

Magnetic Ink Character Recognition (MICR): The machine recognition of characters printed with magnetic ink. Primarily used for check processing by the banking industry.

Magnetic Stripe: A plastic wallet-size card with a strip of magnetic tape on one surface; widely used for credit/debit cards.

Magnetic Tape: A plastic tape with a magnetic surface on which data can be stored by selective magnetisation of portions of the surface.

Mainframe Computer: A larger-size computer system, typically with a separate central processing unit, as distinguished from microcomputer and minicomputer systems.
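The contrast between direct access and the sequential access defined later in this glossary can be sketched with a Python list standing in for the storage medium (an illustrative model, not from the text):

```python
# Direct vs. sequential access to five stored records.

storage = ["rec0", "rec1", "rec2", "rec3", "rec4"]

def direct_access(store, address):
    """Direct access: jump straight to a unique address in constant time."""
    return store[address]

def sequential_access(store, wanted):
    """Sequential access: read through positions in order, tape-style,
    until the wanted record is found."""
    for position, record in enumerate(store):
        if record == wanted:
            return position
    return None

print(direct_access(storage, 3))           # rec3
print(sequential_access(storage, "rec3"))  # 3
```

Disks support the first pattern; magnetic tape, as described earlier, naturally supports only the second.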
Microcomputer: A very small computer, ranging in size from a "computer on a chip" to a small typewriter-size unit.

Microprocessor: A semiconductor chip with circuitry for processing data.

Midrange Computer: Larger and more powerful than most microcomputers but smaller and less powerful than most large mainframe computer systems.

Minicomputer: A small electronic general-purpose computer.

Network Computer: A new category of microcomputer designed mainly for use with the Internet and intranets on tasks requiring limited or specialised applications and no or minimal disk storage.

Network Server: A type of midrange computer used to co-ordinate telecommunications and resource sharing and to manage large web sites, intranets, extranets, and client/server networks.

Network Terminal: A terminal that depends on network servers for its software and processing power.

Off-line: Pertaining to equipment or devices not under control of the central processing unit.

Online: Pertaining to equipment or devices under control of the central processing unit.

Optical Character Recognition (OCR): The machine identification of printed characters through the use of light-sensitive devices.

Optical Disk Storage: Technology based on using a laser to read tiny spots on a plastic disk. The disks are currently capable of storing billions of characters of information.

Optical Disk Storage – CD-ROM: An optical disk technology for microcomputers featuring compact disks with a storage capacity of over 500 megabytes.

Optical Disk Storage – CD-R: Compact disk recordable (CD-R) enables computers with CD-R disk drive units to record their own data once on a CD, then be able to read the data indefinitely.

Optical Disk Storage – CD-RW: Compact disk rewritable (CD-RW) enables computers with CD-RW disk drive units to record and erase data by using a laser to heat a microscopic point on the disk's surface.
Optical Disk Storage – DVD: Digital video disk or digital versatile disk (DVD) enables computers with DVD disk drive units to hold from 3.0 to 8.5 gigabytes of multimedia data on each side of a compact disk.

Optical Disk Storage – WORM Disk: Optical disk that allows users to write once, read many times.

Optical Scanning: Using a device (scanner) that scans characters or images and generates their digital representations.

Pen-Based Computing: Tablet-style microcomputers that recognise handwriting and hand-drawing done with a pen-shaped device on their pressure-sensitive display screens.

Peripheral Devices: In a computer system, any unit of equipment, distinct from the central processing unit, that provides the system with input, output, or storage capabilities.

Personal Digital Assistant: Handheld microcomputer devices designed for convenient mobile communications and computing.

Pointing Devices: Devices that allow end users to issue commands or make choices by moving a cursor on the display screen.

Pointing Device – Electronic Mouse: A small device that is electronically connected to a computer and is moved by hand on a flat surface in order to move the cursor on a video screen in the same direction. Buttons on the mouse allow users to issue commands and make responses or selections.

Pointing Device – Pointing Stick: A small button-like device sometimes likened to the eraser head of a pencil. The cursor moves in the direction of the pressure you place on the stick.

Pointing Device – Touchpad: A small rectangular touch-sensitive surface usually placed below the keyboard. The cursor moves in the direction your finger moves on the pad.

Pointing Device – Trackball: A roller device set in a case used to move the cursor on a computer's display screen.

Primary Storage: The main (or internal) memory of a computer. Usually in the form of semiconductor storage.

Printers: Devices that produce hard copy output such as paper documents or reports.
Secondary Storage: External or auxiliary storage that supplements the primary storage of a computer.

Semiconductor Memory: Microelectronic storage circuitry etched on tiny chips of silicon or other semiconducting material.

Semiconductor Memory – RAM: Also known as main memory or primary storage; a type of memory that temporarily holds data and instructions needed shortly by the CPU. RAM is a volatile type of storage.

Semiconductor Memory – ROM: Also known as firmware; a memory chip that permanently stores instructions and data that are programmed during the chip's manufacture. Three variations on the ROM chip are PROM, EPROM, and EEPROM. ROM is a nonvolatile form of storage.

Sequential Access: A sequential method of storing and retrieving data from a file.

Smart Cards: Cards such as debit and credit cards that have an embedded microprocessor chip and several kilobytes of memory.

Speech Recognition: Direct conversion of spoken data into electronic form suitable for entry into a computer system. Promises to be the easiest, most natural way to communicate with computers.

Storage Capacity Elements: Units used for storage capacity and data: bits, bytes, kilobytes (KB), megabytes (MB), gigabytes (GB), terabytes (TB).

Storage Capacity Elements – Bit: A contraction of "binary digit". It can have the value of either 0 or 1.

Storage Capacity Elements – Byte: A sequence of adjacent binary digits operated on as a unit and usually shorter than a computer word. In many computer systems, a byte is a grouping of eight bits that can represent one alphabetic or special character or can be "packed" with two decimal digits.

Storage Capacity Elements – Kilobyte (K or KB): When referring to computer storage capacity, equivalent to 2 to the 10th power, or 1,024 in decimal notation.

Storage Capacity Elements – Megabyte (MB): One million bytes. More accurately, 2 to the 20th power, or 1,048,576 in decimal notation.

Storage Capacity Elements – Gigabyte (GB): One billion bytes.
More accurately, 2 to the 30th power, or 1,073,741,824 in decimal notation.

Storage Capacity Elements – Terabyte (TB): One trillion bytes. More accurately, 2 to the 40th power, or 1,099,511,627,776 in decimal notation.

Storage Media Trade-offs: The trade-offs in cost, speed, and capacity of various storage media.

Supercomputer: A special category of large computer systems that are the most powerful available. They are designed to solve massive computational problems.

Time Elements: Units used for measuring processing speeds: milliseconds, microseconds, nanoseconds, and picoseconds.

Time Elements – Millisecond: A thousandth of a second.

Time Elements – Microsecond: A millionth of a second.

Time Elements – Nanosecond: One billionth of a second.

Time Elements – Picosecond: One trillionth of a second.

Touch-Sensitive Screen: An input device that accepts data input by the placement of a finger on or close to the CRT screen.

Transaction Terminals: Terminals used in banks, retail stores, factories, and other work sites to capture transaction data at their point of origin. Examples are point-of-sale (POS) terminals and automated teller machines (ATMs).

Video Output: Video displays are the most common type of computer output.

Volatility: Memory (such as electronic semiconductor memory) that loses its contents when electrical power is interrupted.

Wand: A handheld optical character recognition device used for data entry by many transaction terminals.

Workstation: A computer terminal or micro- or minicomputer system designed to support the work of one person. Also, a high-powered computer to support the work of professionals in engineering, science, and other areas that require extensive computing power and graphics capabilities.

V. DISCUSSION QUESTIONS

Do you agree with the statement: "The network is the computer"?

What trends are occurring in the development and use of the major types of computer systems?
Do you think that network computers (NCs) will replace personal computers (PCs) in business applications?

Are networks of PCs and servers making mainframe computers obsolete?

What trends are occurring in the development and use of peripheral devices? Why are those trends occurring?

When would you recommend the use of each of the following in business applications: network computers, NetPCs, network terminals, and information appliances?

What processor, memory, magnetic disk storage, and video display capabilities would you require for a personal computer that you would use for business purposes?

What other peripheral devices and capabilities would you want to have for your business PC?
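The power-of-two values behind the storage capacity units defined in the glossary can be verified with a short script (an illustrative check, not part of the original text):

```python
# Verify the binary values of the storage capacity units from the glossary:
# KB = 2**10, MB = 2**20, GB = 2**30, TB = 2**40 bytes.

units = {"KB": 10, "MB": 20, "GB": 30, "TB": 40}
for name, power in units.items():
    print(f"1 {name} = 2**{power} = {2**power:,} bytes")

# 1 KB = 2**10 = 1,024 bytes
# 1 MB = 2**20 = 1,048,576 bytes
# 1 GB = 2**30 = 1,073,741,824 bytes
# 1 TB = 2**40 = 1,099,511,627,776 bytes
```

Each unit is 2**10 = 1,024 times the previous one, which is why the "one million bytes" and "one billion bytes" figures in the glossary are only approximations.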