Saturday, August 31, 2019

Health Campaigns to Use to Explain Models of Behaviour Change Essay

This report will investigate at least three recent health education campaigns and use them to explain two models of behaviour change. The three recent health education campaigns will be ‘Smoke Free’, ‘Change4Life’ and ‘FRANK’. The two models of behaviour change will be the theory of reasoned action and the stages of change model. For a health educator to carry out their role effectively, they should understand the complicated processes which may influence an individual to change their behaviour.

The theory of reasoned action

This theory gives an outline that looks at the attitudes which strengthen behaviours. It suggests that the most significant cause of an individual’s behaviour is behaviour intent. Behaviour intent is the person’s intention to carry out a behaviour, and this depends on their attitude and the subjective norm. The subjective norm is the influence of individuals in somebody’s social environment on their intention to perform the specific behaviour. If an individual believes that the outcome of taking on a behaviour will be positive, they will have a positive attitude towards the particular behaviour. If other individuals who are important to that person also believe that this behaviour change is positive, then a positive subjective norm is formed. By having a combination of both the individual believing the outcome of adopting the behaviour will be positive and other individuals believing that the behaviour change is positive, it is much more likely that the person will follow the health advice.

The stages of change model

The stages of change model says that the process of behaviour change can be broken down into five stages. The five stages are pre-contemplation, contemplation, preparation, action and maintenance. Pre-contemplation is when there is no intention to change behaviour in the near future. At this stage individuals are not aware at all, or not aware enough, of their problems. Contemplation is when individuals are aware that a problem is there and are seriously considering overcoming their problem, but they have not yet made a commitment to do something about it. At the stage of preparation individuals are intending to do something about it very soon, however they have not done anything about it recently. At the stage of action individuals make changes to their behaviour, experiences or environment so that they can overcome their problems. This needs a lot of commitment of time and energy. Maintenance is the stage when individuals work to try and prevent relapse and consolidate what they have gained during action. The model is often shown as a wheel, and some individuals may have to go through the process many times to be successful in leaving the cycle and attaining a steady and maintained changed behaviour. The following picture shows the wheel of the stages of behaviour change:

Smoke Free

The following hyperlink is to an online version of the Smoke Free health education campaign: http://smokefree.nhs.uk/advice-and-information/behind-the-campaign/ The Smoke Free health education campaign uses the theory of reasoned action model of behaviour change. The campaign aims to encourage individuals who smoke to quit smoking. If an individual who smokes views the Smoke Free campaign they may realise that if they quit smoking it will have a positive effect on their health and prevent serious harm.
If they do believe that the outcome of following the health advice provided by the Smoke Free campaign will be positive, for example that it will reduce their risk of developing illness, disability or death caused by cancer, heart or lung disease, reduce their risk of gangrene or amputation caused by circulatory problems, improve fertility levels and improve their breathing and fitness, they will therefore have a positive attitude towards the behaviour of stopping smoking. Other people who are important to the individual who is considering quitting smoking may also view the campaign and believe the outcome of the person stopping smoking will be positive, for example that it will protect the health of those around the individual by not exposing them to second-hand smoke. By the person having a positive attitude and the positive subjective norm, the person will be much more likely to follow the health advice given by the Smoke Free campaign and quit smoking. The Smoke Free campaign also uses the stages of change model. When an individual is trying to quit smoking they will go through the stages of change cycle. At the stage of pre-contemplation the individual who smokes does not have any intention to change their behaviour; they may not be aware, or not aware enough, of the damage that smoking can cause to their body and of their smoking problem. At the stage of contemplation the individual may start becoming aware that they have a problem with their smoking and they are seriously considering stopping smoking, but they have not yet made a commitment to do something about trying to quit smoking. At the preparation stage they are intending to do something about trying to stop smoking very soon, but they have not done anything yet. At the action stage the individual makes changes to their behaviour so that they can overcome their smoking problem, for example completely stopping smoking, gradually cutting down on smoking, or using nicotine replacement therapies such as nicotine patches, nicotine gum and inhalators. At the maintenance stage the individual will work to try and stop themselves starting smoking again, and they look at what they have gained during the action of changing their behaviour by quitting smoking, such as their health improving. The individual may not be successful with stopping smoking on this occasion; they may relapse and start smoking again, so they might have to go through the process many times before they completely stop smoking.

Change4Life

The link below is to an online version of the Change4Life health education campaign: http://www.nhs.uk/Change4Life/Pages/change-for-life.aspx An individual may follow the advice that the Change4Life health education campaign provides if they are overweight. The campaign tries to encourage individuals to become more active, eat more healthily, drink less alcohol and so on, to prevent individuals from becoming seriously overweight, which can increase individuals’ chances of getting heart disease, type 2 diabetes mellitus and some cancers. The Change4Life health education campaign uses the theory of reasoned action model of behaviour change. If an individual who may be overweight sees the Change4Life campaign they may realise the harm that being overweight can cause, and they might realise that if they follow the advice of Change4Life it might have positive effects.
If the person does believe that following the advice given by the Change4Life campaign will result in positive outcomes, for example their weight reducing, their fitness levels improving, and the chances of them developing conditions like heart disease, type 2 diabetes mellitus and cancers decreasing, they may have a positive attitude towards the behaviour of losing weight. Other individuals who are important to the person who is considering losing weight might also see the campaign and believe it will result in positive outcomes for the person trying to lose weight. By the individual who wants to lose weight having a positive attitude and the individuals who are important to them also being positive, the likelihood of the person following the health advice provided by the Change4Life campaign and losing weight will be higher. The Change4Life health education campaign also applies the stages of change model. When a person is trying to lose weight they may go through the stages of change cycle. To begin with the person may not have any intention to change their behaviour, because they might not be aware at all, or not completely aware, of the harm that being overweight can cause and of their weight problem; this is the pre-contemplation stage. They might start becoming aware that they do have a weight problem and be seriously considering losing weight, however they have not yet made a commitment to do something about trying to lose weight; this is the contemplation stage. At the stage of preparation the person is intending to do something about losing weight, but they have not done anything recently. At the action stage the person makes changes to their behaviour so that they can overcome their weight problem, such as increasing their exercise levels, changing their diet to make it healthier, or reducing their alcohol intake. At the stage of maintenance the person will work to try and prevent themselves putting weight back on, and they look back at what they have attained during the action of changing their behaviour by losing weight.

FRANK

Below is a hyperlink to an online version of the FRANK health education campaign: http://www.talktofrank.com/ FRANK supports individuals who have a drug addiction, to help them overcome their problems. The FRANK health education campaign uses the theory of reasoned action model of behaviour change. If someone who has a drug addiction views the FRANK health education campaign they might recognise that they have a problem and that their addiction can cause serious harm to their body, and they may also realise that following the advice that FRANK gives could have positive effects. If they believe that following the advice provided by FRANK will have positive effects, such as their health improving and maybe their social life and mental health improving, they might have a positive attitude towards the behaviour of stopping taking drugs. Other people who are important to the individual who is thinking about stopping taking drugs may also view the campaign and believe it will have positive effects for the individual trying to stop taking drugs. By both the person who wants to stop taking drugs and the subjective norm having a positive attitude, it may mean that the person will stop taking drugs. The FRANK health education campaign also uses the stages of change model. When an individual tries to stop taking drugs they go through the stages of change cycle.
To start with the individual might not be planning to change their behaviour, as they are not aware, or not aware enough, of the damage that taking drugs can do and that they have a drug problem. This is the pre-contemplation stage. They may then begin becoming aware that they do have a drug problem and they are considering stopping taking drugs, but they have not committed themselves to doing something about stopping taking drugs yet. This is the contemplation stage. At the preparation stage the individual is planning to do something about stopping taking drugs, however they have not done anything yet. The individual may then make changes to their behaviour to help them overcome their drug problem, for example starting to receive talking therapies where they can talk about their drug problem, motivational treatment approaches, cognitive behavioural therapy, group therapy, or being prescribed a safer alternative/substitute for the problem drug, such as methadone instead of heroin. This is the action stage. At the stage of maintenance the individual will work to try and stop themselves relapsing by taking drugs again, and they also look at what they have achieved throughout the action of not taking drugs. The person may have to go through the process several times before they are successful in fully recovering from their drug addiction. Not everyone has the same ability to change their health behaviours. This is due to social and economic factors. The social and economic context can influence the ability of health education campaigns to change behaviour in relation to health.

Friday, August 30, 2019

Impact of integrated marketing communication on brands Essay

As mentioned above, having a good and effective brand can be achieved by various factors and approaches, and one of these is through integrated marketing communication. Integrated marketing communication is known as a strategic coordination of multiple communication voices. The objective of this is to optimize the effect of persuasive communication on both the non-consumer and consumer, including trade and professional audiences, by coordinating the elements of the marketing mix, which include public relations, advertising, package design, promotions and direct marketing (Moore & Thorson 1996). In this regard, it is evident that different approaches can be used to ensure that the information about the brands is conveyed to the targeted market. Furthermore, IMC is also considered a strategic approach for coordinating all messages and media utilised by a company to collectively affect its perceived brand value (Keegan, Moriarity & Duncan, 1992). In addition, IMC is also referred to as a cross-functional approach for generating and sustaining good relationships with clients by strategically controlling or managing all the information sent to them and by purposely encouraging two-way dialogues with the target market. Integrated marketing communication has been considered to have an effect on brands. Accordingly, its concept, which aims at managing customer relationships, has the ability to drive brand value for the company and generate desired results (Clowe & Baack 2004). Through integrated marketing communication, brands are strategically promoted through the use of various promotional elements as well as the marketing process, to communicate the message of the company and the brand to the specified target market (Moore and Thorson, 1996). It has been noted that integrated marketing communication aims at using direct communication so as to bring about behavioural changes among consumers who will purchase a specific brand (Shimp, 2000). Integrated marketing communication also relates the message to the client, which brings behavioural changes that help the brand to establish a strong and tight relation with the target market. Furthermore, the context of integrated marketing communication stresses the significance of coordination and synergy so as to develop and maintain a strong brand image. By using various communication instruments through the integration concept of marketing communications, industries have the ability to use effective methods to strengthen their brands with their target market and promote stronger brand names to their targets (Kotler, 2004). The IMC approach can also be considered to affect or influence brands positively by giving the brands the chance to sustain their competitive advantage among clients by identifying the most useful and appropriate methods of communicating and establishing good customer relations, which include strong relationships with stakeholders including the employees, investors, suppliers, interest groups and the public in general. The main objective of communicating the brand image is to instill a stable and consistent impression among clients (Fill, 2002). In addition, integrated marketing communication affects the brand in a way that gives the brand the opportunity to sustain its marketability. The application of integrated marketing tools can enable the brands to communicate with other target segments.
For instance, because of the increasing popularity of the World Wide Web, each company that invests in having its own website will have the chance to reach consumers locally and internationally. This means that the reach of the brand is also extended and expanded in the global market through the use of integrated marketing communication. With this, the brands of different companies will be able to emerge quickly in the marketplace, since the tools and approaches of integrated marketing communication act as a shop window for many businesses today. In addition, this will also permit the clients to easily find important information about a specific brand and to know the different features of a specific brand. In addition, integrated marketing communication is also important in impacting the brand, since it will serve as a marketing communication approach for effectively promoting the brands, which aims to result in more sales from other distribution channels. The rationale for choosing this marketing channel is that integrated marketing communication can help brands to reach target consumers worldwide, and this could be a great opportunity to be developed in order for the brand to be quickly recognized by the target market. In this regard, the overall campaign element of the brand must be integrated so as to attain the desirable marketing communication objectives. It is said that the target market does not separate or divide sponsorship, advertising, sales promotion, and the internet as marketing communications approaches. The clients tend to receive the messages about a specific brand from various sources and form either a favourable or unfavourable image of the brand. As far as the company is concerned, the source of the brand information is not that important. What is more essential is the content of the information conveyed and to what degree the brand promise has actually been delivered to the target market (Fill & Yeshin, 2001). It can be said that all campaign activities lead down to marketing communication, and that the key to efficient communication is to comprehend how the clients process the vast amount of information that comes their way every day and how it helps the brand to reach its target audience (Fill, 2002). In order to sustain the competitive advantage of the specific brand, the market must be able to select only the important message that the management perceives to be important in enhancing the brand value, and ignore the rest. If the marketing information is to be selected and processed, the management must ensure that it includes sensory and life experiences which can easily be determined and changed into a unified context, has mental relationships to other categorized brand ideas, and fits into the categories and mental linkages that consumers have already created for themselves.

Conclusion

The context of branding is said to be useful in terms of comprehending and analysing the competitive position of an organisation. The brand of the company remains an important part of marketing communication as it is mostly recognised by its clients. It can be said that the heavy consideration of marketing communication in branding can create the impression that the brand can be promoted through the use of integrated marketing communication. This analysis shows that brand image strength and effectiveness are important to sustaining the competitive advantage of the company. Much has been said about the importance of having a strong and effective brand image.
Based on this analysis, it can be said that a strong and effective brand is something that can influence the choice of the target market and meets the brand personality provided. It can be said that each organisation must have a brand image which addresses the dimensions and characteristics of a strong and effective brand. In order to achieve this, the company must be able to manage the brand efficiently. By and large, it can be said that brands have many useful attributes. A brand can be used by the company as a means of promoting recall, as an asset, in providing premium and quality in the market, and in generating perceived differentiation. It can also be said that brands are complex phenomena and can more easily be understood using metaphors such as likening a brand to a person. Analysis has shown that to be a strong and effective brand, it must be able to meet the needs and demands of the clients and the company itself. In addition, analysis shows that the use of integrated marketing communication is an important aspect in making the brand more attractive and appealing to the target market. It can be concluded that integrated marketing communication affects the brand’s competitive position by enabling the target market to know more about the brands and eventually contributes to increasing sales; notably, integrated marketing communication can help in new institutional development and launches of the brand. In addition, the study shows that a strong and effective brand has the ability to produce audiences in a multi-channel environment that enables the company to be known in the global market. Second, strong and effective brands can be an outcome of an effective integrated marketing communication approach.

References

Aaker, D. (1991). Managing Brand Equity. New York: The Free Press.
Aaker, D. A. & Joachimsthaler, E. (2000). Brand Leadership. New York: Free Press.
Asher, J. (1997). Promoting brand identity: what’s your name again? ABA Banking Journal, Vol. 89.
Bailey, S. & Schultz, D. (2000). Customer/Brand Loyalty in an Interactive Marketplace. Journal of Advertising Research, 40(3), 41.
Balmer, M. T. J. & Wilson, A. (1998). Corporate identity: there is more to it than meets the eye. International Studies of Management & Organization, Vol. 28.
Biel, A. (1992). How brand image drives brand equity. Journal of Advertising Research, 32, 6-12.
Brassington, F. & Pettitt, S. (2000). Principles of Marketing, 2nd edition. Harlow: Financial Times Pitman.
Clow, K. & Baack, D. (2007). Integrated Advertising, Promotion and Marketing Communications, 3rd edition. Pearson Prentice Hall.
Engel, J., Blackwell, R. & Miniard, P. (1995). Consumer Behavior, 8th ed. Orlando, FL: The Dryden Press.
Fill, C. (2002). Integrated Marketing Communication. Oxford: Butterworth-Heinemann.
Laforet, S. & Saunders, J. (1999). Managing brand portfolios: Why leaders do what they do. Journal of Advertising Research, Vol. 39.
Keller, K. L. (1993). Conceptualizing, measuring, and managing customer-based brand equity. Journal of Marketing, 57(January), 1-22.
Kim, H., Kim, W. G. & An, J. A. (2003). The effect of consumer-based brand equity on firms’ financial performance. Journal of Consumer Marketing, 20(4), 335-351.
Kotler, D. (1997). Marketing Management: Analysis, Planning, Implementation and Control (9th ed.). Upper Saddle River, NJ: Prentice Hall.
Kotler, P. (1999). Marketing Management: Analysis, Planning, Implementation and Control (9th ed.). Englewood Cliffs, NJ: Prentice Hall Inc.
Kotler, P. et al. (2004). Principles of Marketing, European edition.
Kotler, P. (2001). Marketing Management. Northwestern University: Prentice Hall International, Inc.
McCombs, M. (2003). Everything you know about branding is wrong, expert advises: Guess who’s really in charge? Available at [brandharmony.com]. Accessed July 16, 2008.

Thursday, August 29, 2019

Centralized and decentralized research analysis of United States and Japan's educational system Essay

Due to the No Child Left Behind (NCLB) Act, a number of education scholars and practitioners assert that the federal government is pursuing, or possibly already fulfilling, a significantly greater function. In the meantime, although Japanese education espoused the education paradigm of the United States after the Second World War, K-12 education is far more centralized in Japan than in the U.S. Curriculum responsibility is concentrated in the national Ministry of Education, Culture, Sports, Science, and Technology.iii The United States and Japan are two countries that are ranked in the top four for best educational systems in 2010, even though each country uses a different teaching style to achieve success with faculty and student performance.iv However, both nations are lacking key factors to sustain success with students in today's changing economy. Based on the present government reform acts in both the United States' and Japan's educational systems, there is a need for (1) more technical skills in basic education, (2) support for teachers and parents to renew the value of education in students, and (3) removal of violence from the education environment, in order to achieve success in each country's educational system. In 1856, the United States formed its first kindergarten. Compulsory education, by the 1950s, had become institutionalized, yet the current K-12 education remains in its formative years.v Ever since the establishment in 1979 of the US Department of Education, the structure of K-12 education has been identical to that of the present, but has experienced a chain of modifications to address the evolving requirements of education.vi The education structure of the United States is distinct from several other developed nations. Education is mainly the duty of local and state government, and hence, for instance, there is modest standardization. The independent states have substantial power over the curriculum and over the prerequisites that

Wednesday, August 28, 2019

What is a Social Network Essay Example | Topics and Well Written Essays - 250 words - 2

Within the healthcare sector, social media has been effective in the creation of relationships like patient-physician engagement and physician-physician collaboration, and has also been a marketing tool for healthcare institutions (Thielst, 2010). Despite the very many advantages that healthcare institutions have gained from social media usage, there are also some negative elements associated with the dependence on this form of communication. The utilization of social media exposes the organization to various threats within the context of IT. The lack of control over the communication makes the information being conveyed risky, as the source cannot be clearly ascertained and malicious attacks can also be undertaken through social media. It has, therefore, become important for the healthcare institution to implement strong policies on social media use. Adherence to these policies by healthcare professionals remains important because it minimizes the risk to which social media exposes the organization (Banerjee, 2015). For healthcare professionals, these policies become the guidelines that can be relied upon for eliminating the legal risk associated with social media threats within the healthcare sector. This enables the professional to maintain their professionalism within the working environment.

Tuesday, August 27, 2019

Flip The Funnel Reflection paper Research Example | Topics and Well Written Essays - 1250 words

The book named ‘Flip the Funnel’ shows a triangular relationship between good old-fashioned fundamentals of business, the future-focused vision of business leaders and, most importantly, common sense. This book allows the development of a totally new way of thinking and provides space for turning the existing conventional practices upside down in order to develop a new way of thinking (Jaffe, 2010, p. 270). This new methodology has taught me the way in which businesses can expand through shrinking costs. Reduction in costs would also allow shrinking the financial budget. It helps managers to adopt strategies for achieving higher yield from making less investment and incurring lesser costs. This new process focuses not only on profit maximization but also on the needs of the customers, so as to induce them to satisfy their needs by spending their money with the particular company.

Underpinnings of the new theory

According to my perception, the Flip the Funnel theory concentrates on two underutilized constituencies in the marketing field: the customer evangelists and the employees of the company. This theory on one hand focuses on the prospective customers who are not existing customers of the company but who might become loyal customers of the brand in future. On the other hand, it also focuses on the employees. Employees are also customers at the other end of the funnel. This potential has remained unexplored and, if exploited, can yield good benefits. The method of flipping the funnel is different from the acquisition principle of the traditional marketing funnel that emphasizes generating new business from new customers (Court, Elzinga, Mulder & Vetvik, 2009). In difficult times the company does not have the opportunity of mitigating risks, cutting costs or making a tradeoff between quality and quantity, since quality helps the company to keep its reputation alive. On the contrary, the company has to improve its commitment and make wise investment decisions. All activities of the business affect customer decisions and therefore affect the future performance of the business. Although for big businesses the effect of these activities sometimes appears to be insignificant, they are often actually game changing. I have learnt from the book that customer experience, short-term interactions between the company (its brand, products and customer service) and its customers, and long-term business performance are inter-related. Therefore these activities have the potential to significantly impact future sales, both in terms of repeat purchases by the existing customers and also new business generated from the existing customers. Therefore, Jaffe has put it convincingly that there should be no hesitation regarding flipping the funnel in modern business activities, but the doubt should be cleared regarding the appropriate time at which the funnel can be flipped. Also the question of how much the funnel should be flipped has to be answered well ahead of actually practicing the method of turning the traditional marketing funnel (Jaffe, 2010, p. 271). This new method advocates a strong idea: a company-customer pact should be followed that imbues the idea of partnership between these two parties.

New approaches to marketing

The book demonstrates several examples by

Monday, August 26, 2019

Borderline Personality Disorder Research Paper Example | Topics and Well Written Essays - 2500 words

When a person has borderline personality, they are unable to control the emotions that they want to feel, frequently displaying emotions that are inappropriate for any given situation. Borderline personality alters the way in which a person views themselves, their surroundings, and their relationships with others. One of the first signs that someone may be suffering from borderline personality disorder is that they begin to look down on themselves, regarding themselves as evil or worthless, or feeling as though they do not exist at all. The person becomes insecure and loses their sense of self-worth. This often leads to problems within the work area, family, or intimate relationships. One moment the person may completely adore someone, and then the next moment they may want absolutely nothing to do with them (Kreisman & Straus, 1991); these feelings can also describe how a person feels about themselves. Someone being affected by borderline personality disorder cannot decide how they really feel about someone, and even if their explanations of their feelings to themselves make sense, their emotions often say something entirely different. To make matters worse, their emotions change from day to day, so they can never pinpoint their honest feelings. Other symptoms of borderline personality disorder include risky behavior, such as unsafe sex, gambling, drug and alcohol use and abuse, and reckless driving, as well as a difficulty in controlling the impulses to engage in the aforementioned activities. Intense emotions that come and go often, uncalled-for anger and negativity, harsh but random spikes of depression or anxiety, and suicidal thoughts and attempts are also symptoms that have been linked to borderline personality disorder. One of the more common symptoms is a fear of being alone, as a person with borderline personality realizes that they are pushing people away without that being their intention, yet they are not sure how to make their emotions

Sunday, August 25, 2019

Dynamic Regression A Simulation Exercise Math Problem

From the chart, the drop is also evident in the market and MOTOR returns, and this shows that a drop in the market returns will also signify a drop in the returns of the stocks in the market. Finally, from the chart it is evident that there was a decline in the market returns in 1987, showing that returns for the other stocks also declined. We use 120 observations to estimate the model rjt = αj + βj rmt + ujt for both stocks; we use MOTOR return data for the years 1976 to 1985. After estimation using the TSM software the results show that rjt = 0.00255 + 0.7193 rmt. The above model means that if we hold all factors constant and the market return level is equal to zero then the MOTOR stock return will be 0.00255; also, if we hold all factors constant and we increase the market return level by one unit then the MOTOR stock return level will increase by 0.7193 units. ... The above model means that if we hold all factors constant and the market return level is equal to zero then the GPU stock return will be 0.00063; also, if we hold all factors constant and we increase the market return level by one unit then the GPU stock return level will increase by 0.4297 units. The R squared for this model is 0.0854 and this means that 8.54% of deviations in the dependent variable are explained by the independent variable. The coefficient of determination (R squared) value for this model depicts a weak relationship between the explanatory variable and the dependent variable.

Hypothesis testing: We test hypotheses for the estimated coefficients in the two models.

MOTOR model: rjt = 0.00255 + 0.7193 rmt

MOTOR model constant:
Null hypothesis: α = 0
Alternative hypothesis: α ≠ 0
Standard error: 0.00737
Coefficient: 0.00255
T calculated = 0.00255 / 0.00737 = 0.34599
T critical at 95% level of test = 1.95996
When the T calculated value is less than the T critical value we accept the null hypothesis; in the above case, therefore, we accept the null hypothesis that α = 0 and conclude that the constant is not statistically significant at the 95% level of test.

MOTOR model slope:
Null hypothesis: β = 1
Alternative hypothesis: β ≠ 1
Standard error: 0.12481
Coefficient: 0.7193
T calculated = (1 - 0.7193) / 0.12481 = 2.249
T critical at 95% level of test = 1.95996
When the T calculated value is greater than the T critical value we reject the null hypothesis; in the above case, therefore, we reject the null hypothesis that β = 1 and conclude that the slope is statistically significantly different from 1 at the 95% level of test.

GPU model: rjt = 0.00063 + 0.4297 rmt

GPU model constant:
Null hypothesis: α = 0
Alternative hypothesis: α ≠ 0
Standard error: 0.00841
Coefficient: 0.00063
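The same market-model estimation and coefficient tests can be reproduced with standard regression tools. Below is a minimal sketch in Python, using hypothetical simulated returns in place of the actual MOTOR/market series quoted above (the TSM data are not reproduced here); the t-statistics are formed exactly as in the text.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly return data standing in for the 120 observations
# (1976-1985) used in the essay; replace with the actual series.
rng = np.random.default_rng(0)
r_m = rng.normal(0.01, 0.05, 120)                      # market returns
r_j = 0.0025 + 0.72 * r_m + rng.normal(0, 0.04, 120)   # stock returns

# Estimate r_jt = alpha_j + beta_j * r_mt + u_jt by ordinary least squares.
X = sm.add_constant(r_m)
fit = sm.OLS(r_j, X).fit()
alpha, beta = fit.params
se_alpha, se_beta = fit.bse

# Test H0: alpha = 0 and H0: beta = 1, as in the text.
t_alpha = (alpha - 0) / se_alpha
t_beta = (1 - beta) / se_beta
print(f"alpha = {alpha:.5f} (t = {t_alpha:.3f})")
print(f"beta  = {beta:.4f} (t vs beta=1: {t_beta:.3f})")
print(f"R-squared = {fit.rsquared:.4f}")
# Compare |t| with the 5% two-sided critical value (about 1.98 for 118 df).
```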

Saturday, August 24, 2019

Leadership and Management as Some of the Most Important Aspects of the Organizational Structure Essay

The researcher states that leadership refers to the process by which an individual has the ability to enlist the support of others so as to accomplish a common task. Management, on the other hand, is described as the process of getting resources together to enable one to accomplish a certain task. According to F. W. Taylor, management is "the art of knowing what you want to do and then seeing that it is done the best and cheapest way". The management process comprises organizing, planning, directing, staffing and controlling an entity so as to attain a certain objective. Some researchers have identified differences between leadership and management. Warren Bennis listed a number of differences between the two. The first of these differences was that a manager maintains and administers while a leader's work is to develop and innovate. He also said that while the manager is a copy, a leader is an original. Managers mostly focus on systems and structure while the leader's focus is on the people. While managers rely on control and imitating, the leader originates and inspires trust. Management is characterized by short-term views, while it is the exact opposite when it comes to leadership. The basic duty of management is to do things right, while for leadership it is to do the right thing. Although these differences exist between leadership and management, the two must go hand in hand so as to ensure maximum efficiency within an organization. The subject of leadership and management has attracted much attention from researchers who have identified different approaches to the two. Most of these approaches are quite significant and relevant in today's world, as will be discussed in this paper. An important theory of leadership is Bass' theory, which states that the way people become leaders can be explained in three basic points. The first is that certain personality traits may naturally lead people to leadership roles. This is also called the Trait Theory. An important occurrence or crisis may cause a person to exhibit leadership qualities never seen before. This is also referred to as the Great Events Theory. The third point is that people can choose to become leaders by learning leadership skills.

The Turning Point of Tet 1968 Essay Example | Topics and Well Written Essays - 500 words

Startled at such an unexpected idea, I felt uncomfortable and I began thinking of declining the proposed agreement to appoint me for the post; yet seeing that it would be an awkward moment to do so and that the majority were not quite drawn to encourage someone else for it, I gave in. In the process, however, I discovered that the type of work assigned to me allowed flexibility, so that if I knew how to manage time and energy wisely, I could adjust my level of productivity within a range of efficiencies depending on the work amount, my available relevant skills, as well as my ability to delegate tasks to others. This is the point at which I recognized having the capacity to think strategically. Even with my current non-military organization, knowing that everybody is focused on individual assignments and that my fellow teammates normally maintain a passive attitude in examining my activities, I gain the leverage of controlling my behavior toward workload. By 'strategic thinking', I could execute around the essentials of concentrating my efforts on situations that call for my knowledge and capability at the optimum, so that the fulfilment I earn would serve as my drive for the next projects. In this manner, I often gain the chance of being able to reserve time and energy for human relations, which enables me to address general interests and win the confidence of many to whom I have been able to delegate some jobs. Due to the bond of trust established, it becomes much easier to communicate with people and have them naturally seek grounds for understanding schemes for the committee which I carry out under my own terms. Moreover, I could detect strategic thinking in the course of spontaneously developing the trait of ignoring negative impressions attached to temporary unpleasant acts or intentions. With your own understanding of what cooperation and support you need from others involved, what do you need from others in their roles to accomplish your own work

Friday, August 23, 2019

SHORT BIOGRAPHY HISTORY Essay Example | Topics and Well Written Essays - 750 words

Furthermore, living near the borders must have accentuated his "difference" from the dominant white class. Nevertheless, it could be that because of his difference that he enjoyed life from another perspective. In "The Secret Lion," Ríos shows that human nature and nature nurtured his intellectual, social, and emotional development as a biracial adolescent. Human nature's tendency to seek for freedom and opportunity dominated Ríos' teenage life. When he and his friend Sergio found a "cannonball," they called it a "lion" (Ríos par. 1). The title says it is a secret lion, because they told no one of this "treasure" that they found. This lion represents freedom and opportunities. It allowed Alberto and Sergio the freedom to own something no one can take away from them. Being twelve years old, they know that adults will only confiscate their discovery. Ríos says: "That's the way it works with little kids… Junior high kids too" (6). Adults are shown as thieves of innocent happiness. It is up to Alberto to use his human nature to protect what he thinks is his. So they take this cannon ball and hide it and never tell anyone about it, especially adults. The "lion" also stands for something mystically strong and perfect. Having this ball in their possession gives them the opportunity to feel something "perfect" in their lives. It is round and therefore "perfect" and it spreads perfection to its beholders (5). It is "heavy" and they feel its importance. If they have something important, then they too are important. They do not have to feel smaller, as some minorities do in dominant white cultures. They can be round and perfect; they can be special like this lion. When Alberto says that this ball changed them, he implies that it made them "roar" (1). They have found a symbol of empowerment. A cannon ball explodes. It has inert power that is waiting to be released. Alberto must have felt this lion is him

Thursday, August 22, 2019

Windows environment Essay Example for Free

If you get these 10 settings right, you'll go a long way toward making your Windows environment more secure. Each of these falls under the Computer Configuration\Windows Settings\Security Settings leaf.

Rename the Local Administrator Account: If the bad guy doesn't know the name of your Administrator account, he'll have a much harder time hacking it.

Disable the Guest Account: One of the worst things you can do is to enable this account. It grants a fair amount of access on a Windows computer and has no password. Enough said!

Disable LM and NTLMv1: The LM (LAN Manager) and NTLMv1 authentication protocols have vulnerabilities. Force the use of NTLMv2 and Kerberos. By default, most Windows systems will accept all four protocols. Unless you have really old, unpatched systems (that is, more than 10 years old), there's rarely a reason to use the older protocols.

Disable LM hash storage: LM password hashes are easily convertible to their plaintext password equivalents. Don't allow Windows to store them on disk, where a hacker hash dump tool would find them.

Minimum password length: Your minimum password size should be 12 characters or more. Don't bellyache if you only have 8-character passwords (the most common size I see). Windows passwords aren't even close to secure until they are 12 characters long, and really you want 15 characters to be truly secure. Fifteen is a magic number in the Windows authentication world. Get there, and it closes all sorts of backdoors. Anything else is accepting unnecessary risk.

Maximum password age: Most passwords should not be used longer than 90 days. But if you go to 15 characters (or longer), one year is actually acceptable. Multiple public and private studies have shown that passwords of 12 characters or longer are relatively secure against password cracking for about that length of time.

Event logs: Enable your event logs for success and failure. As I've covered in this column many times, the vast majority of computer crime victims might have noticed the crime had they had their logs on and been looking.

Disable anonymous SID enumeration: SIDs (Security Identifiers) are numbers assigned to each user, group, and other security subject in Windows or Active Directory. In early OS versions, non-authenticated users could query these numbers to identify important users (such as Administrators) and groups, a fact hackers loved to exploit.

Don't let the anonymous account reside in the Everyone group: Both of these settings, when set incorrectly, allow an anonymous (or null) hacker far more access on a system than should be given. These have been disabled by default since 2000, and you should make sure they stay that way.

Enable User Account Control: Lastly, since Windows Vista, UAC has been the No. 1 protection tool for people browsing the Web. I find that many clients turn it off due to old information about application compatibility problems. Most of those problems have gone away, and many of the remaining ones can be solved with Microsoft's free application compatibility troubleshooting utility. If you disable UAC, you're far closer to Windows NT security than you are to a modern operating system.
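As a rough illustration of how a couple of these settings could be audited programmatically, the sketch below reads the registry values that commonly back the LM/NTLM and LM-hash policies. It is only a minimal example, assuming it runs with sufficient rights on a Windows machine; the registry path and value names are the usual documented ones, but verify them against your own environment before relying on the output.

```python
import winreg

# LM/NTLM behaviour is controlled by values under the LSA key
# (assumed path: HKLM\SYSTEM\CurrentControlSet\Control\Lsa).
LSA_KEY = r"SYSTEM\CurrentControlSet\Control\Lsa"

def read_dword(value_name):
    """Return a DWORD from the LSA key, or None if the value is not set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, LSA_KEY) as key:
            value, _type = winreg.QueryValueEx(key, value_name)
            return value
    except FileNotFoundError:
        return None

# LmCompatibilityLevel 5 = send NTLMv2 only, refuse LM and NTLM (the hardened setting).
level = read_dword("LmCompatibilityLevel")
print("LmCompatibilityLevel:", level, "(OK)" if level == 5 else "(want 5)")

# NoLMHash 1 = do not store the weak LM hash of passwords on disk.
no_lm_hash = read_dword("NoLMHash")
print("NoLMHash:", no_lm_hash, "(OK)" if no_lm_hash == 1 else "(want 1)")
```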

Wednesday, August 21, 2019

Price Elasticity in Air Travel

Introduction:

Elasticity is defined as the quality something has of being able to stretch and return to its original size and shape (Oxford Advanced Learner's Dictionary, 6th edition). In physics, elasticity is defined as the property of a substance that enables it to change its length, volume, or shape in direct response to a force effecting such a change and to recover its original form upon the removal of the force (dictionaryreference.com). Suppose that your employer allows you to work extra hours beyond your contracted hours for extra pay at the end of the month; the amount of extra money you will earn at the end of the month will depend on how many more extra hours you are able to work. How responsive you are to this offer can be seen as elasticity. Therefore I will define elasticity as the measure of the degree of responsiveness of any variable to an extra stimulus. From my example above, elasticity can be calculated as Em = percentage of extra money you earn / percentage of extra hours worked. The concept of elasticity can be used to measure the rate or the exact amount of any change. In economics, elasticity is used to measure the magnitude of responsiveness of a variable to a change in its determinants (Sloman), such as the demand and supply of goods and services. For the purpose of this essay I am going to be examining the concept of elasticity of demand and supply in the airline industry.

Types of Elasticity
Price or own-price elasticity of demand
Income elasticity of demand
Cross elasticity

Price or own-price elasticity of demand
It is the measure of the degree of sensitivity or responsiveness of quantity demanded to a change in the price of a product (Edgar K. Browning). Our assumption often is that all demand curves have negative slopes, which means the lower the price the higher the quantity demanded, but the degree of responsiveness varies from product to product. For example, a reduction in the price of cigarettes might bring about only a little increase in quantity demanded, whereas a supermarket reduction in the price of washing-up liquid will produce a big increase in quantity demanded. The law of demand, and even common sense, tells us that when prices change, the quantities purchased will change too. However, by how much? Businesses need to have more precise information than this; they need to have a clear measure of how the quantity demanded will change as a result of a price change. Price elasticity is calculated as the percentage (or proportionate) change in quantity demanded divided by the percentage (or proportionate) change in its price. Symbolically:

PεD = %ΔQ / %ΔP

Here ε denotes elasticity and Δ denotes change. Elasticity is measured in percentages because this allows a clear comparison of changes in qualitatively different things which are measured in two different units (Sloman). It is the only sensible way of deciding how big a change in price or quantity is, so it is called a unit-free measure. Generally, when the price of a good increases the quantity demanded decreases; thus one of the two numbers will be negative, and after division the result will be negative. Due to this fact we always ignore the sign and just concentrate on the absolute value, which tells us how elastic demand is. The larger the elasticity of demand, the more responsive the quantity demanded is to price.
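To make the formula concrete, here is a small worked sketch in Python, with made-up numbers, that applies PεD = %ΔQ / %ΔP to the washing-up-liquid example above and reports the absolute value used to judge how elastic demand is.

```python
def price_elasticity(p0, p1, q0, q1):
    """Own-price elasticity: % change in quantity / % change in price."""
    pct_change_q = (q1 - q0) / q0 * 100
    pct_change_p = (p1 - p0) / p0 * 100
    return pct_change_q / pct_change_p

# Hypothetical figures for the washing-up-liquid example: price halves from
# £1.00 to £0.50 and weekly sales jump from 200 to 500 bottles.
e = price_elasticity(1.00, 0.50, 200, 500)
print(f"elasticity = {e:.2f}, |elasticity| = {abs(e):.2f}")
# |elasticity| > 1 here, so demand is elastic: total spending rises as price falls.
```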
Degrees of elasticity
Perfectly elastic
Highly elastic
Relatively elastic
Relatively inelastic
Highly inelastic
Perfectly inelastic

Elastic Demand
Elastic demand occurs when quantity demanded changes by a bigger percentage than price (Sloman). Here the customer has a lot of other alternatives. The value is always higher than 1, and the change in quantity has a bigger effect on total consumer spending than the change in price. For example, if there is a reduction in the price of a bottle of washing-up liquid, say from £1.00 to 50p, people will buy more, probably to store up; in doing this they will end up spending more on the product than they would on a normal day.

An Inelastic Demand
Inelastic demand occurs when quantity demanded changes by a smaller percentage than price, so the value of the elasticity is less than 1.

Elasticity in the airline industry
The airline industry is deeply impacted by the elasticity of demand, externalities, wage inequality, and monetary, fiscal, and federal policies. The elasticity of demand is based purely on current market conditions, the customer's purpose for travel, and available substitutes. The September 11th tragedy had a negative effect on the entire travel industry. It impacted the fiscal and monetary policies, supply and demand, and it created staffing problems nationwide. The rate of wage inequality is improving due to legislation that has created a pay increase in participating cities across the United States. The airline industry is viewed as being unstable because it is based on current market conditions, and the market is always changing, along with the customer's purpose for travel and available substitutes. Externalities continue to influence the elasticity of demand.

The Elasticity of Demand
The airline industry is an extremely unstable industry because it is highly dependent upon current market conditions. Events such as inflation, terrorist attacks, and the price of oil have greatly influenced the demand for airline tickets throughout the years. Competition consistently affects the price of airline tickets because it gives the customer other options. Substitutes that are in existence are traveling by train or car, or avoiding travel whenever possible. Customers have resorted to all named substitutes during turbulent times in our economy. The elasticity of demand is greatly affected by the customer's purpose for travel. Airline customers typically fly for business or pleasure. With the wave of technology, a large percentage of business travel has been eliminated to conserve spending.

Elasticity
In the airline industry, price elasticity of demand is separated into two segments of consumers and is considered to be both elastic and inelastic. A good example of how elastic demand is related to the airline industry is in relation to travel for pleasure. Pleasure travellers will be affected in the amount of travel they do based on the demand increase or decrease, affected by prices that lower with high demand or prices that rise with low demand, directly attributed to competition in this market (Gerardi & Shapiro, 2007). Inversely, an inelastic demand would apply to the business traveller in this market. This is shown by demand increases or decreases, as well as the price distribution attributed, having little effect on the buying power of the business person (Gerardi & Shapiro, 2007). Furthermore, Voorhees and Coppett (1981) explain that elastic demands exist for the pleasure traveler because demand rises while prices lower and vice versa. The business traveler experiences an inelastic demand because the quantity of service demanded has not decreased as prices have risen.
In other words, this travel is seen as a necessary business tool, not affected by price changes in the demand curve. As we have seen, the airline industry is extremely price elastic. Small shifts in prices have dramatic effects on the consumer base. Externalities, such as noise ordinances, can cause negative effects, driving cost upward and threatening loss in demand due to a price-sensitive customer base. Since deregulation, competition in the economy has kept prices in the industry low and has caused airlines to force cuts in areas such as wages, contributing to a growing concern about wage inequality.

References:
Gerardi, K. & Shapiro, A. (2007, April). The Effects of Competition on Price Dispersion in the Airline Industry: A Panel Analysis. Working Paper Series (Federal Reserve Bank of Boston), 7(7), 1-46. Retrieved April 30, 2008, from Business Source Complete database.
Mankiw, N. G. (2004). Principles of Economics (3rd ed.). Chicago, IL: Thomson South-Western.
Morrison, S., Watson, T. & Winston, C. (1998). Fundamental Flaws of Social Regulation: The Case of Airplane Noise. Retrieved May 8, 2008, from http://www.brookings.edu/~/media/Files/rc/papers/1998/09_airplane_winston/09_airplane_winston.pdf
Voorhees, R. & Coppett, J. (1981, Summer). New Competition for the Airlines. Transportation Journal, 20(4), 78-85. Retrieved April 30, 2008, from Academic Search Premier database.

The airline industry is a private good. Mankiw (2004) states that private goods are excludable and rival goods. One needs to see through the anti-trust laws and regulations that tempt some to call the industry a natural monopoly; airlines still reserve the right to administer price and destination. The airline industry shows that it is an excludable good by having the power to place prices on fares and having the ability to refuse service to any person for whatever reason. The airline industry also shows that it is a rival good because when someone purchases fare for a seat, it diminishes the ability of another person to get a seat on the plane. Because the airline industry is a private good, in a competitive marketplace, prices, supply, and demand are very sensitive to new policies or tax incidences placed on them. (Associatedcontent.com, viewed 18/11/10)

This phenomenal increase in the demand for domestic air travel is not surprising. Airfare is an expensive commodity that few people can afford or are willing to pay for. Also, a typical consumer may not be able to avail of such a commodity regularly. It takes time for the consumer to demand it again. In economics, this scenario is explained by its ELASTICITY. The concept of elasticity refers to the responsiveness of the quantity demanded of a good or service to a change in its price, income, or cross price. This post will provide a better understanding of this matter, specifically price elasticity.

Analysis
Below is a list of indicators that determine the elasticity of a good/service. Domestic air travel has been employed as a sample commodity.
Substitutes. (The more substitutes it has, the higher the elasticity.) Airlines have numerous substitutes such as land or sea transportation.
Percentage of Income. (The higher the percentage that the product's price is of the consumer's income, the higher the elasticity.) Airfares are too expensive relative to household income.
Necessity. (Basic goods have lower elasticity.) Airline tickets are luxury goods.
Duration. (The longer a price change holds, the higher the elasticity.)
Airline fare does not change for a long time.
Breadth of Definition. (The broader the definition, the lower the elasticity.) Domestic airline travel has a more specific definition than ordinary air transportation.

1. Introduction
The purpose of this study is to report on all or most of the economics and business literature dealing with empirically estimated demand functions for air travel, to collect a range of fare elasticity measures for air travel, and to provide some judgment as to which elasticity values would be more representative of the true values to be found in different markets in Canada. While existing studies may include the leisure/business class split, other important market distinctions are often omitted, likely as a result of data availability and quality.[3] One of the principal value-added features of this research, and what distinguishes it from other surveys, is that we develop a meta-analysis that not only provides measures of dispersion but also recognizes the quality of demand estimates based on a number of selected study characteristics. In particular, we develop a means of scoring features of the studies such as focus on length of haul; business versus leisure; international versus domestic; the inclusion of income and inter-modal effects; the age of the study; data type (time-series versus cross-section) and the statistical quality of estimates (adjusted R-squared values). By scoring the studies in this way, policy makers are provided with a sharper focus to aid in judging the relevance of various estimated elasticity values.[4]

2. Elasticity in the Context of Air Travel Demand
Elasticity values in economic analysis provide a units-free measure of the sensitivity of one variable to another, given some pre-specified functional relationship. The most commonly utilized elasticity concept is that of own-price elasticity of demand. In economics, consumer choice theory starts with axioms of preferences over goods that translate into utility values. These utility functions define choices that generate demand functions from which price elasticity values can be derived. (Figure: own-price elasticity of demand concept.) Therefore elasticities are summary measures of people's preferences, reflecting sensitivity to relative price levels and changes in a resource-constrained environment. The ordinary or Marshallian demand function is derived from consumers who are postulated to maximize utility subject to a budget constraint. As a good's price changes, the consumer's real income (which can be used to consume all goods in the choice set) changes. In addition, the good's price relative to other goods changes. The changes in consumption brought about by these effects following a price change are called income and substitution effects respectively. Thus, elasticity values derived from the ordinary demand function include both income and substitution effects.[5]
Arc price elasticity of demand calculates the ratio of the percentage change in quantity demanded to the percentage change in price using two observations on price and quantity demanded. Formally this can be expressed as:

\eta = \frac{\Delta Q / \bar{Q}}{\Delta P / \bar{P}} \qquad (1)

where \Delta Q and \Delta P represent the observed changes in quantity demanded and price, and \bar{Q} and \bar{P} represent the average quantity demanded and price. The elasticity is unitless and can be interpreted as an index of demand sensitivity; it measures the degree to which a variable of interest will change (passenger traffic in our case) as some policy or strategic variable changes (total fare including any added fees or taxes in our case). In the limit (when \Delta Q and \Delta P are very small) we obtain the point own-price elasticity of demand, expressed as:

\eta = \frac{\partial Q(P, S)}{\partial p} \cdot \frac{p}{q} \qquad (2)

where Q(P, S) is the demand function, P is a vector of all relevant prices, p is the good's own-price, q is the quantity demanded of the good, and S is a vector of all relevant shift variables other than prices (real income, demographic characteristics etc.). We expect own-price demand elasticity values to be negative, given the inverse relationship between price and quantity demanded implied by the law of demand, with absolute values less than unity indicating inelastic demand: a less than proportionate response to price changes (relative price insensitivity). Similarly, absolute values exceeding unity indicate elastic or more sensitive demand: a more than proportionate demand response to price changes (relative price sensitivity). The ratio of the change in quantity demanded to the change in price [equation (1)] highlights that elasticity measures involve linear approximations of the slope of a demand function. However, since elasticity measures proportionate change, elasticity values will change along almost all demand functions, including linear demand curves.[6] Estimation of elasticity values is therefore most useful for predicting demand responses in the vicinity of the observed price changes. As a related issue, analysts need to recognize that in markets where price discrimination is possible, aggregate data will not allow for accurate predictions of demand responses in the relevant market segments. In air travel, flights by a carrier are essentially joint products consisting of differentiated service bundles that are identified by fare classes. However, the yield management systems employed by full-service carriers (FSCs) also create a complex form of inter-temporal price discrimination, in which some fares (typically economy class) decline and some increase (typically full-fare business class) as the departure date draws closer. This implies that, ideally, empirical studies of air travel demand should separate business and leisure travellers or at least be able to include some information on booking times in order to account for this price discrimination, and that price data should be calibrated for inter-temporal price discrimination: for example, the use of full-fare economy class ticket prices as data will overestimate the absolute value of the price elasticity coefficient. Within the set of differentiated service bundles that comprise each (joint product) flight, the relative prices are important in explaining the relative ease of substitution between service classes. Given the nature of inter-temporal price discrimination for flights, the relative price could also change significantly in the time period prior to a departure time.
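Returning to the arc formula in equation (1), here is a minimal sketch of the calculation using invented fare and traffic observations; the figures are assumptions for illustration, not estimates for any actual route or market.

```python
# Arc price elasticity of demand (equation 1): the ratio of the percentage change in
# quantity to the percentage change in price, each measured against the average of
# the two observations. All figures below are hypothetical.

def arc_elasticity(q1, q2, p1, p2):
    dq = q2 - q1
    dp = p2 - p1
    q_avg = (q1 + q2) / 2
    p_avg = (p1 + p2) / 2
    return (dq / q_avg) / (dp / p_avg)

# A fare increase from $300 to $330 with traffic falling from 10,000 to 8,800 passengers:
e = arc_elasticity(q1=10_000, q2=8_800, p1=300.0, p2=330.0)
print(f"arc elasticity = {e:.2f}")  # about -1.34, i.e. elastic demand on this route
```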
The partial derivative in (2) indicates that elasticity measures price sensitivity independent of all the other variables in the demand function. However when estimating demand systems over time, one can expect that some important shift variables will not be constant. It is important that these shift variables be explicitly recognized and incorporated into the analysis, as they will affect the value of elasticity estimates. This will also be true with some cross-sectional studies or panels.[7] In particular changes in real income and the prices of substitutes or complements will affect demand. In air travel demand estimations, income and prices of other relevant goods should be included in the estimation equation. Alternative transportation modes (road and rail) are important variables for short-haul flights, while income effects should be measured for both short and long-haul. The absence of an income coefficient in empirical demand studies will result in own-price elasticity estimates that can be biased. With no income coefficient, observed price and quantity pairs will not distinguish between movements along the demand curve and shifts of the demand curve.[8] The slope of a demand function, which affects the own-price elasticity of demand, is generally expected to decrease (become shallower) with: The number of available substitutes; The degree of competition in the market or industry; The ease with which consumers can search and compare prices; The homogeneity of the product; The duration of the time period analyzed.[9] Given the implied relationships above, any empirical demand study should carefully define market boundaries to include all relevant substitutes and complements and to exclude products that might be related through income or other more general variables. In air travel, ideally market segment boundaries should be defined by first separating leisure and business passengers and second long-haul and short-haul flights. The reason is that we expect different behaviour in each of these markets. Within each of these categories, distinctions should then be made between the following: Connecting and origin-destination (O-D) travel; Hub and non-hub airports;[10] Routes with dominant airlines and routes with low-cost carrier competition. In addition, for the North American context, long-haul flights should be further divided into international and domestic travel (within continental North America). These market segment boundaries are illustrated in figure 2.1 below, which also highlights the relative importance of intermodal competition for short-haul travel. While distinctions in price and income sensitivity of demand between business and leisure or long and short-haul travel are more intuitive, other distinctions are perhaps less obvious. If available, data that distinguishes between routes, airlines and airports would provide important estimates of how price sensitivity is related to the number of competing flights and the willingness to pay of passengers utilizing a hub-and-spoke network, relative to those traveling point-to-point, more commonly associated with low cost carriers. To the extent that existing studies assume that each passenger observation represents O-D travel, they will not be capturing fare premiums usually associated with hub-and-spoke networks and full service carriers, nor will they necessarily capture the complete itinerary of travellers utilizing a number of point-to-point flights with a low cost carrier.
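The omitted-income point made above can be illustrated with a small simulation (a sketch under assumed parameters, not a replication of any study's estimates): a log-linear demand relationship is generated with a known fare elasticity and an income effect, and a least-squares fit is run with and without the income term.

```python
# Sketch: omitting income from a log-linear air travel demand regression biases the
# estimated own-price elasticity, because fare/quantity pairs then mix movements along
# the demand curve with shifts of it. The "true" parameters are assumed, not estimated.
import numpy as np

rng = np.random.default_rng(0)
n = 500
log_income = rng.normal(10.0, 0.3, n)                 # rising income shifts demand out
log_fare = 0.5 * log_income + rng.normal(0, 0.2, n)   # fares correlated with income
true_price_elasticity, income_elasticity = -1.2, 1.5
log_q = (2.0 + true_price_elasticity * log_fare
         + income_elasticity * log_income + rng.normal(0, 0.1, n))

def ols(y, *regressors):
    """Ordinary least squares with an intercept; returns the coefficient vector."""
    X = np.column_stack([np.ones_like(y), *regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

full = ols(log_q, log_fare, log_income)   # income included: coefficient near -1.2
short = ols(log_q, log_fare)              # income omitted: coefficient badly biased
print(f"with income included: fare elasticity ~ {full[1]:.2f}")
print(f"with income omitted:  fare elasticity ~ {short[1]:.2f}")
```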
As an example of the itinerary problem just described, a passenger who travels from Moncton to Vancouver with Air Canada, and utilizes the hub at Pearson International Airport, is being provided with a number of services that include baggage checked through to the final destination and frequent flyer points, as well as a choice of flights and added flight and ground amenities. The fare for Moncton-Vancouver includes a premium for these services. Now consider a passenger who is travelling with WestJet from Moncton to Hamilton, and then with JetsGo from Toronto Pearson Airport to Vancouver. In this case there are no frequent flyer points to be attained and baggage has to be collected and re-checked after a road transfer between Hamilton and Pearson International. Although the origin and destination are the same for these passengers, the itineraries are significantly different. In many cases data used for demand estimates would not be able to account for these differences. Route-specific data can also capture competition that may exist between airports and the services they offer, as well as between airlines. This may be especially true for certain short-haul routes where intermodal competition (road and rail) can play an important role in shaping air travel demand. 3. Measurement Issues Oum et al. (1992) provide a valuable list of pitfalls that occur when demand models are estimated and therefore affect the interpretation of the elasticity estimates from these empirical studies. 1. Price and Service Attributes of Substitutes: Air travel demand can be affected by changes in the prices and service quality of other modes. For short-haul routes (markets) in particular, the relative price and service attributes of auto and train would need to be included in any model. Failure to include the price and service attributes of substitutes will bias the elasticity. For example, if airfares increase and auto costs are also increasing, the airfare elasticity would be overestimated if auto costs were excluded. 2. Functional Forms: Most studies of air travel demand use a linear or log-linear functional specification. Elasticity estimates can vary widely depending on the functional form. The functional form should be selected on the basis of statistical testing, not ease of interpretation. 3. Cross-Section vs. Time-series Information: In the long run, demand elasticities for non-durable goods and services are larger in absolute terms than in the short run. This follows because in the long run there are many more substitution possibilities that can be used to avoid price increases or service quality decreases; in effect there are more opportunities to avoid these changes through substitution. Data tends to be cross-sectional or time-series, although more recently panels have become available. A panel is a combination of cross-section and time-series information, for example on several routes over a multi-year period. Cross-sectional information is generally regarded as indicating short run elasticities, while time-series data is interpreted as giving long run elasticities. In time-series data the information reflects changes in markets, growth in income and changes in competitive circumstances, for example. Policy changes should rely on long run elasticities since it is long run impacts that are being modelled. Short run elasticities become important when considering the competitive position of firms in a highly dynamic and competitive industry.
4. Market Aggregation/Segmentation: As the level of aggregation increases, the amount of variation in the elasticity estimates decreases. This occurs because aggregation averages out some of the underlying variation relating to specific contexts. Since air travel market segments may differ significantly in character, competition and dominance of trip purpose, interpreting a reduction in variation through aggregation as a good thing would be erroneous. Such estimates might have relatively low standard deviations but would also be relatively inaccurate when used to assess the effect of changes in fares in a specific market. 5. Identification Problem: In most cases only demand functions are estimated in attempts to measure the demand elasticity of interest. However, it is well known that the demand function is part of a simultaneous equations system consisting of both supply and demand functions. Therefore, a straightforward estimation of only the demand equation will produce biased and inconsistent estimates. The problem of identification can be illustrated by describing the process by which fares and travel, for example, are determined in the origin-destination market simultaneously. To model this process in its entirety, we must develop a quantitative estimate of both the demand and supply functions in a system. If, in the past, the supply curve has been shifting due to changes in production and cost conditions, for example, while the demand curve has remained fixed, the resultant intersection points will trace out the demand function. On the contrary, if the demand curve has shifted due to changes in personal income, while the supply curve has remained the same, the intersection points will trace out the supply curve. The most likely outcome, however, is movement of both curves, yielding a pattern of fare-quantity intersection points from which it will be difficult, without further information, to distinguish the demand curve from the supply curve or estimate the parameters of either.[11] Earlier we identified sources of bias that can arise from problems with aggregation, data quality and implicit assumptions of strong separability, among others. Almost all demand studies have an implied assumption of strong separability in that they only consider aviation markets in the analysis. Such studies in effect constrain all changes or responses in fares or service to be wholly contained in the aviation component of people's consumption bundle. The paper by Oum and Gillen (1986) is the one exception where consideration of substitution with other parts of consumption was included in the modelling. It would be difficult to extract a conclusion from this one study as to the existence, degree and direction of bias in elasticity estimates when other parts of consumption are and are not included in the modelling. However, having said this, an inspection of the elasticity estimates from this study shows they are not significantly different from other time-series estimates. 3.1 Data Issues Elasticity estimates depend critically on the quality and extent of the data available. Currently, the best data for demand estimation is the DB1A 10 percent ticket sample in the US, but even this data has some problems.[12] The DB1A sample represents 10 percent of all tickets sold, with the full itinerary identified by the coupons attached to the ticket. However, with electronic tickets, as more and more tickets are being sold over the Internet, there is a growing portion of overall travel that may not be captured in the sample.
This means that the proportion is not 10 percent but something less.[13] Other important considerations are the amount of travel on frequent flyer points and by crew and airline personnel. In Canada we have poor quality data because it is incomplete, even where it is accessible. Airports collect traffic statistics, but these data make it very difficult to distinguish O-D and segment data. Airlines report traffic data to Statistics Canada (or are supposed to) but these data do not include fare information or routing. Knowing the itinerary or routing is important because of differences in service quality and hubbing effects. Fare data is also more useful than yield information since it identifies the proportion of people travelling in different fare classes. Yet, in many cases yield information is used as a weighted average fare. There is also the problem that carriers of different size may have different reporting requirements. Some researchers and consultants have been cobbling together data sets for analysis by using the PBX clearing house information. These data are limited and apply only to those airlines that are members of IATA.[14] The current public data available in Canada simply does not permit estimation of any demand models. Besides demand side data it is also important to have supply side information. Elasticity estimates should emerge from a simultaneous equations framework. This data is more accessible through organizations like the OAG[15], which provide information on capacity, airline and aircraft type for each flight in each market.[16] These data measure changes in capacity, flight frequency and timing of flights. One study, which undertook an extensive survey to collect multimodal data,[17] was the High Speed Rail study sponsored jointly by the Federal, Ontario and Quebec governments. This study, which had three different demand modelling efforts, examined the potential for High Speed Rail demand, and subsequent investment, in the Windsor-Quebec corridor. The analysis included intermodal substitution between air, rail, bus and car. The study was undertaken in the early 1980s. However, public access to the technical documents that would allow an assessment of the study is not possible; attempts in the past to obtain access to the data have proven fruitless. 3.2 Distinguishing Elasticity Measures As we have stated, price elasticity measures the degree of responsiveness to a change in own or other prices (fares). However, care must be exercised in interpreting elasticities since they differ according to how they have been estimated. Many empirical studies of air travel demand estimate a log-linear model. In evaluating such studies, it is important to keep in mind that the empirical specification implies a certain consumer preference structure because of the duality between utility functions and demand functions. It is equally important to remember that empirically estimated demand functions should contain some measures of quality and service differences or quality changes over time. Failure to include metrics for frequent flyer programs, flight frequency, destination choice or service levels in estimating an air demand function can lead to downward bias in the price elasticity estimates. Price elasticities can be estimated for aggregate travel demand as well as modal demand. Figure 3.1 illustrates the differences between aggregate and modal elasticities.[18] Our interest is in modal elasticities, not the aggregate amo

Tuesday, August 20, 2019

An Overview of Tourette's Syndrome

Tourette's syndrome When you think of Tourette's, what comes to mind? Tourette's is a common disorder which may start in early childhood. This condition is characterized by physical and verbal tics (Tourette Syndrome Fact Sheet). Tourette's syndrome, also known as TS, first presented itself when a man named Georges Gilles de la Tourette wrote a paper on nine people who exhibited involuntary motor and vocal tics (Georges Gilles de la Tourette). Tourette's association with the study of this disorder led to it being named after him. Georges Gilles was born in the small town of Saint-Gervais-les-Trois-Clochers; he was a French neuropsychiatrist and an expert on epilepsy. Georges was known for the intense media coverage that followed an attempt on his life (Georges Gilles de la Tourette). He was shot in 1893 by Rose Kamper, a former patient of his who had made accusations that he had hypnotized her against her will. He recovered from the gunshot, and his attacker was diagnosed with what is now called paranoid schizophrenia. He is more famously known for publishing the first writings on people who had Tourette's, simply stating that these tics were random and uncontrollable (Georges Gilles de la Tourette). The cause of Tourette syndrome is unknown, though many speculate, and current research points to abnormalities in the brain (Tourettes Syndrome). Evidence from twin and family studies proposes that TS is an inherited disorder (Tourette Syndrome Fact Sheet). Symptoms are typically noticed in early childhood between the ages of seven and ten. TS occurs in people from all ethnic groups and age groups, but males have a higher chance of being affected than females. It is estimated that 200,000 Americans have a severe form of TS, and one in 100 display milder and less complex symptoms such as chronic motor or vocal tics (Tourette Syndrome Fact Sheet). Although the DSM-5 does not directly talk about TS, it does mention disorders that are linked to it. Various people can experience additional problems such as obsessive compulsive behavior, characterized by repetitive behaviors such as hand washing or checking things repeatedly and mental acts like praying and counting (American Psychiatric Association). Others include attention deficit-hyperactivity disorder, described by difficulty concentrating and staying on task; learning disabilities, which include reading, writing and arithmetic difficulties; and even sleeping disorders (Tourettes Syndrome). TS is not a psychological disorder but more of a neuropsychiatric disorder; although they are linked together, these other disorders can come with Tourette's. On the other hand, not everyone with TS will have disorders other than their tics. What is TS, you may ask? TS tics can be divided into two groups, motor tics and vocal tics, and within those two groups you can have simple and complex motor or vocal tics. Simple motor tics are sudden, brief, repetitive movements that involve a limited number of muscle groups (Tourette Syndrome Fact Sheet). Some of the more common simple motor tics include eye blinking, facial grimacing, shoulder shrugging, and head or shoulder jerking. Simple vocal tics might include repetitive throat-clearing, sniffing, or grunting sounds (Tourette Syndrome Fact Sheet). Complex tics are distinct, coordinated patterns of movements involving several muscle groups (Tourettes Syndrome).
Complex motor tics might include facial grimacing combined with a head twist and a shoulder shrug, sniffing or touching objects, hopping, jumping, bending, or twisting. Simple vocal tics may include throat-clearing, sniffing/snorting, grunting, or even barking. The most intense tics include motor movements that cause self-harm, such as punching oneself in the face, or vocal tics including coprolalia and echolalia, which are uttering swear words and repeating the words or phrases of others (Tourette Syndrome Fact Sheet). Some tics are led by an urge or sensation in the affected muscle group, or a need to complete a tic in a certain way or a certain number of times in order to relieve the urge (Tourette Syndrome Fact Sheet). People with TS can sometimes suppress their tics for a short time, but the effort is similar to that of holding back a sneeze. Eventually tension mounts to the point where the tic escapes. Tics worsen in stressful situations; however, they improve when the person is relaxed or absorbed in an activity. In most cases, tics decrease markedly during sleep (Tourettes Syndrome). How can TS be counteracted? Currently, there is no brain test or laboratory test to convincingly prove someone has TS, and when it comes to TS there is no definitive medication that will cure this disorder completely. Generally, TS is diagnosed by obtaining a description of the tics and evaluating family history, after verifying that the patient has had both motor and vocal tics for at least 1 year. Patients, families and physicians need to determine which set of symptoms is most disabling so that appropriate medications and therapies can be used (Tourettes Syndrome). If symptoms do not impair most patients and development proceeds normally, then the majority of people with TS will require no medication. On the other hand, medications are available to help when symptoms interfere with functioning, but unfortunately there is no single medication that helps every person with TS. Some patients who need medication to reduce the symptoms of their tics may be treated with neuroleptic drugs such as haloperidol and pimozide. These medications are usually given in very small doses that are increased slowly until the best possible balance between symptoms and side effects is achieved (Tourette Syndrome Fact Sheet). The most common side effects of neuroleptics include sedation, weight gain, cognitive dulling, tremors, dystonic reactions (twisting movements or postures), and parkinsonian-like symptoms. People with TS often live healthy, active lives; however, Tourette syndrome frequently involves behavioral and social challenges that can harm a person's self-image. The biological perspective focuses on genetics and biological processes influencing behavior (Rathus). As stated before, TS can come with other disorders such as ADHD and attention deficit disorder, and even obsessive compulsive disorder (Tourette Syndrome Fact Sheet). These disorders can make a person with TS behave differently, for example losing one's temper a lot, anger, difficulty paying attention and controlling impulsive behaviors (Tourette Syndrome Fact Sheet). Traditional behaviorists believed that the environment and personal experiences influence a person's behavior (Rathus). Stress can often make TS worse, in that it makes the tics more rapid (Tourette Syndrome Fact Sheet). This can also be due to the person's environment.
This is how TS is related to psychology: it can explain the behavior of someone with TS and how these disorders are linked together. References American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5). Vol. 5. Arlington: American Psychiatric Publishing, 2013. 237. 1 December 2016. Georges Gilles de la Tourette. 2014. Soylent Communications. 18 October 2016. Rathus, Spencer A. Psychology: Principles in Practice. Austin, 2003. Textbook. Tourette Syndrome Fact Sheet. Ed. Office of Communications and Public Liaison. 5 October 2005. 18 November 2016. Tourettes Syndrome. Ed. Sussex Publishers. 1 July 2016. 1 December 2016. Tourette's and the biological theory. http://www.mayoclinic.org/diseases-conditions/tourette-syndrome/symptoms-causes/dxc-20163624

Monday, August 19, 2019

Cameron Crowe's Film Jerry Maguire Essay

Cameron Crowe's Film Jerry Maguire In his movie Jerry Maguire, director Cameron Crowe illustrates how failures and successes are all part of life, and how, if you have love and are happy with your life, you will surely succeed. It is part of life to experience failure, which propels one forward to take risks and make changes to find the answers on how to succeed in life's little games. Jerry Maguire is an inspiring movie based on this theme, demonstrating success and failure with business endeavors, love relationships, friendships and self-realization. Relationships between characters in this movie were numerous and very intense. The relationship between Jerry and Rod Tidwell was initially one of strong control exhibited by Tidwell, as when he asks Jerry to yell "Show me the money!" and when he refuses to complete the camel car commercial. This in turn adds to Tidwell's failure with company endorsements and extra cash. Jerry also tries to exhibit control over Tidwell because he expects him to act in a certain way, which he doesn't always do. Jerry proves this when he tells Tidwell the truth about his arrogant actions towards society and the team. Jerry knows it is in Tidwell's best interest to tone down his arrogance in order to succeed, which he does. In the end both men come to realize their faults and change their behavior, which results in the success of Tidwell's career. The other relationship that drive...

Sunday, August 18, 2019

Nuclear Energy

Nuclear Energy Everything in life must have a beginning. It is a scientific fact. This is the same with nuclear energy. Nuclear energy wasn't just discovered, it was created. Nuclear energy is the energy released by a nuclear reaction, especially by fission or fusion. From its first controlled chain reaction to its waste disposal problems, nuclear energy has made major steps. Nuclear energy began in Chicago at Stagg Field. The physicist responsible for this was Enrico Fermi. Here his team was able to create the first controlled chain reaction. The first reactors were based on natural uranium as the nuclear fuel, graphite as the moderator and water as the coolant (Prasar). This opened the floodgates for all nuclear energy. Now that we knew how to control this dangerous energy, we could use it without fearing drastic consequences. In 1955 underwater combat was changed forever. The first nuclear-powered submarine, the USS Nautilus, was fueled by nuclear power. The Nautilus broke all submarine records for underwater speed and endurance. It was launched in the Thames River after Mamie Eisenhower smashed a bottle of champagne across the bow. Due to running on nuclear energy, the Nautilus was able to travel great distances at a top speed of 25 knots or more. This made the submarine a much more potent fighting craft and placed the USA a step ahead of all other countries in underwater war (Norris). Even though it was a remarkable achievement for the time, the navy expected even greater submarines to come in the future. Admiral Robert B. Carney, Chief of Naval Operations, commented that, "as remarkable as this development seems to us now, the Nautilus will probably appear to our sons and grandsons as a quaint old piece of machinery which introduced the transition to a new age of power" (Norris). As the saying goes, "You must take the good with the bad"; this certainly applies to nuclear energy. In 1957 the first of the accidents occurred. At Windscale in north-west England, a fire erupted in a graphite-cooled reactor. This caused a 200 square mile area to become contaminated.

Julius Caesar Essay: Superstition in Julius Caesar

Julius Caesar: Superstition In the play Julius Caesar, we see a brief picture of Roman life during the time of the First Triumvirate. In this snapshot, we see many unfortunate things. Shakespeare gives us the idea that many people try to circumvent what the future holds, such as unfortunate things, by being superstitious. Superstition seems to play a role in the basic daily life of most Roman citizens. The setting of the first scene is based upon superstition, the Feast of Lupercal. This feast is in honor of the god Pan, a god of fertility. During this time, infertile females are supposed to be able to procreate, and fertile ones are supposed to be able to bear more. It is also a supposed time of sexual glorification and happiness. Other scenes depict how, throughout Rome, mysterious soothsayers roam the streets, supposedly given the power to predict the future. Dictating what is to come through terse tidbits, these people may also be looked upon as superstitious. In the opening scene, one soothsayer, old in his years, warns Caesar to "Beware the Ides of March," an admonition of Caesar's impending death. Although soothsayers are looked upon by many as insane, out-of-touch lower classmen, a good deal of them, obviously including the sayer Caesar encountered, are indeed right on the mark. Since they lack any formal office or shop, and they predict forthcomings without fee, one can see quite easily why citizens would distrust their predictions. Superstition, in general elements such as the Feast of Lupercal, as well as on a personal level such as with the soothsayers, is an important factor in determining the events and the outcome of Julius Caesar, a significant force throughout the entire course of the play. As the play develops we see a few signs of Caesar's tragic end. Aside from the soothsayer's warning, we also see another sign during Caesar's visit with the augurers, the latter-day "psychics". They find "no heart in the beast", which they interpret as advice to Caesar that he should remain at home. Caesar brushes it off and thinks of it as a rebuke from the gods, meaning that he is a coward if he does not go out, and so he dismisses the wise advice as hearsay. However, the next morning, his wife Calphurnia wakes up frightened due to a horrible nightmare.

Saturday, August 17, 2019

Communication Technologies Essay

In this assignment I am going to describe different types of communication devices, for example switches, routers, etc. Then I am going to explain the principles of signal theory. After this I will look at different methods of electronic communication and the transmission methods used. Communication Devices Switches – These are mainly used for local area networks (LANs). The reason behind this is that they can be used to bridge a lot of computers together. They look like hubs but they can vary in speed, and they are more intelligent due to the fact that they can send out packets from a set port. There is an advantage of using a network switch: it can be used with an Ethernet cable or a fibre optic cable and it will still work perfectly fine. When connecting a router or a server in a LAN or WAN network it is slightly easier because you would just need one cable, which would mostly need to be a fibre optic cable, so you can get the maximum rate of transfer speed. Routers – They are mainly used for connecting one network to another. They are meant for handling information and forwarding it to another network connected to the router. You can connect either using wireless or a cable; normally an Ethernet cable is used to connect the computer networks. Hubs – Also known as a concentrator or a multiport repeater. Used in a star or a hierarchical network setup to connect the stations or the cable segments. There are two main types of hubs: passive and active. An active hub takes the incoming traffic, amplifies the signal and then forwards it to all the ports. A passive hub simply divides the incoming traffic and forwards it. A hub can be used to manage and allow individual port configuration and traffic. Hubs operate at the physical layer of the OSI model and they are protocol transparent. This means that they have no awareness of upper layer protocols such as IP, IPX or MAC addresses. Hubs just extend the network; they do not control the broadcast or collision domains. Bridges – Used to increase the performance of a network by dividing it into separate collision domains. Even though they are more intelligent than hubs, due to the fact that they operate at the Data Link layer of the OSI model, they still are not able to control the upper layer protocols. For each segment they store a MAC address table of all nodes. Basically a bridge takes the incoming frames, checks the destination MAC address, looks it up against the stored MAC address table and decides what to do. If the frame comes from the same port as the destination port, then it simply discards the frame. If the destination location is not known, then the frame will be flooded through the outgoing ports and segments. Repeaters – One of the less complex pieces of hardware in the networking world, because it basically runs at the physical layer of the OSI model, so it is not aware of frame formats and upper layer protocols. A repeater is basically used to expand a LAN network over a large distance by regenerating the signal. When using repeaters, remember the 5-4-3 rule, which means that between any two hosts on the same network there can be a maximum of 5 segments, joined by a maximum of 4 repeaters, with only 3 of those segments populated. Gateways – Very intelligent devices; they work at the transport layer and the upper layer protocols above it, which means they can manage and control IP, IPX and MAC addressing. They allow, for example, IPX/SPX clients on a TCP/IP uplink network to connect to the internet. A gateway in simple terms is like a post office.
All the information is sent to it and then, just as a post office knows the houses in its area, in the same way a gateway knows all the ports and directs the information there. Cell Phones – A cell phone is a device used by a lot of people. It is a portable and more advanced version of a normal home phone. It allows voice calling and text messaging, and some more advanced phones even allow video calling and internet browsing. A cell phone is a full duplex device, therefore you can connect it to your computer and use it as a modem, even though it would be very slow; the newer phones are somewhere near capable of proper modem speeds. DCE & DTE devices – Data Communication Equipment (DCE) is basically equipment which allows communication with Data Terminal Equipment (DTE). In other words, a DTE terminates the communication line and a DCE provides the path of communication. An example of a DCE is a modem, and a computer is a DTE. Fax Machine – A device which allows you to send paper copies over PSTN lines to other people. It can also be used to send memos and other information as well. It uses the phone line to transmit the data that is sent. A fax machine has a sensor to read the document: it encodes the black and white that it picks up on the paper and moves it to the receiving end. It will compress the data before transmitting it. As soon as the receiving end gets the data it decodes and decompresses it so it can arrange it in the way it was scanned from the original document. There are a lot of parts in a fax machine that allow it to do its job: it consists of a source projecting a light beam, a rotating cylinder and a photoelectric cell. It also has a paper feed like a printer. E-Mail – Email, which is the short form for electronic mail, can be used to stay in contact with your friends, family and even colleagues. It works by finding out the email address of the person you want to email, writing the message, clicking send, and then that person will receive it. Signal Theory When talking about signal theory, data is represented in a digital format which is dependent on binary or base 2 principles. Analogue and digital frequencies are used for transmitting signals along a medium link. Analogue records the waveform as it is; digital, on the other hand, normally turns the analogue signal into sets of numbers. Analogue signals can have varying amplitude and frequency. Amplitude describes the loudness of the signal and frequency determines the pitch of the signal. "Pitch" is mostly used to refer to low and high notes: if the frequency is lowered then you get a low note, and if the opposite is applied then you get a high note. A bit is a binary digit which represents a value of 0, which is normally off, or 1, which is normally on. A bit can also be referred to as an electrical pulse which is generated by the internal clock in the control unit or data register. Bits are also used in digital electronics, which is another system that uses digital signals. A bit manipulated within the memory of a computer can be kept at a steady level on a storage device such as a magnetic tape or disc. A byte, which is made up of 8 bits, is a unit of measurement for information stored on a computer. Synchronous & Asynchronous Communication To sum up, synchronous communication is when interaction with data takes place in real time. On the other hand, asynchronous or delayed communication is when data is archived or stored and accessed later.
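Before the two modes are described in detail below, a small sketch shows the framing overhead that asynchronous transmission adds to each character; this is a hedged illustration, and the single start bit and single stop bit shown are an assumption reflecting one common serial convention.

```python
# Sketch of asynchronous framing: each 8-bit character is wrapped in a start bit (0)
# and a stop bit (1), so 10 bits travel on the line for every 8 bits of data.
# The framing and bit polarity used here are an assumed convention for illustration.

def frame_byte(value: int) -> str:
    data_bits = format(value, "08b")
    return "0" + data_bits + "1"            # start bit + data + stop bit

message = "OK"
frames = [frame_byte(ord(ch)) for ch in message]
line_bits = "".join(frames)

print(frames)                               # ['0010011111', '0010010111']
data_bits = 8 * len(message)
print(f"data bits: {data_bits}, line bits: {len(line_bits)}")
print(f"overhead: {len(line_bits) / data_bits:.2f}x")   # 1.25x -> 25% framing overhead
```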
It is important to choose the most effective delivery mode because it directly impacts the level of interaction that is going to take place. Synchronous – It does not use start or stop bits; instead it synchronizes the transmission speed of the receiving and sending ends of the transmission using clock signals built specifically into each of the components. After this, constant streams of data are transmitted between the two sources. Because no start or stop bits are involved, data transmission is faster, but more problems occur, because if latency takes effect then the synchronization clocks will be out of timing and the receiving node will get the wrong timings from those agreed in the protocol for sending and receiving data. If this happens then data can be corrupted, missing or simply the wrong message. There are ways around this which take time: you could use check digits and re-synchronize the clocks so that you can verify that the transmission has been successful and has not been interrupted. Advantages of using synchronous transmission are that there is lower overhead, so more data can be transmitted, and data transmission rates are also faster. The drawbacks of using synchronous transmission are that it is obviously more prone to problems, more expensive and more complex. Asynchronous – Opposite to synchronous, it uses start and stop bits to mark the start and end of a transmission. This means that 8-bit ASCII characters would be transmitted using 10 bits because of the use of start and stop bits, for example (1)10111111(0): the bracketed one and zero at the start and end mark the start and end of the transmission. This tells the receiving end either that a character is starting to transmit or that it has finished transmitting. This method of transmission is normally used when data is sent occasionally, as opposed to in a solid stream. Benefits of using asynchronous transmission are that it works out cheaper, because timing is not that important, and it is also simple, because both ends do not require synchronization. Drawbacks are that if a large amount of data is to be transmitted it takes a long time, because a lot of the bits are only for control use and do not contain any useful information. Bandwidth is used to define how much volume a medium can transmit; basically it is the maximum rate at which data can be transmitted across a medium. The more bandwidth a wire can handle, the higher the transmission rates that can be achieved, and it can also sustain high transmission rates for multiple users. But there are restrictions in place: for example, if a user has been transmitting a lot of data over a period of time then a temporary limit will be put on. This is quite common with ISPs. To stop this happening to you, the best thing to do is not to download a lot at the same time and also to close programs which use the bandwidth continuously. Radio Transmission Radio is a way of transmitting signals using varied tones which convey a message over electromagnetic waves of a given frequency. Electromagnetic radiation travels by means of oscillating electromagnetic fields which go through the air and the vacuum of space. Changes in the radiated waves, such as amplitude, frequency or phase, allow information to be carried systematically. If the radio waves pass through electrical conductors, the oscillating fields induce an alternating current in the conductor. This can be detected and changed into sound or any other type of signal which is able to carry information.
Every radio system has an inbuilt transmitter; this is the source of electrical energy that produces an alternating current of a desired frequency of oscillation. The inbuilt transmitter also has a system which changes some properties of the energy produced in order to impress a signal on it. This change could be as simple as turning the signal on or off, or it could be more complex, such as altering more subtle properties like amplitude, frequency, phase or a combination of all three. The modulated electrical energy is sent via the transmitter to an antenna. The antenna changes the alternating current into electromagnetic waves; this allows the waves to travel through the air. There are drawbacks of using radio. The first is that attenuation can happen; this basically means that the further the wave has to travel to get to its destination, the weaker it gets. The most obvious example of this would be someone listening to FM radio in the Midlands: the further he goes away from the Midlands, the weaker the signal gets. Microwave – An electromagnetic wave with a frequency of up to 30 GHz. Currently microwaves are getting more popular due to advancing technologies. Microwave offers high bandwidth at low cost. The most common problem with microwave transmission is reflection. Microwaves are commonly used for radar, which picks up planes and helicopters flying in the air: the microwaves hit the plane or helicopter and reflect back, and the reflection is timed, giving the position of the flying object. Waves are reflected by a barrier which stops the wave from going further, so it hits the barrier and reflects back. Reflection affects the signal: if the reflection is not good then it will not come back properly, and a dead or blank signal will be received. To minimize the effect, try staying close to the satellite. Wireless protocols such as Bluetooth use microwaves to transmit. Satellite – A satellite is an orbiting piece of hardware which has been placed in orbit by big companies like Microsoft; satellites can be used for communication. There are also other types of satellites which are used for spying or for online maps such as Google Maps, Microsoft Live Maps or other services. Satellites provide high bandwidth solutions. Satellite is categorized as a WAN technology because it uses high speed, long distance communication technology which allows computers to connect. Attenuation also affects satellite connections for the same reason. If a satellite is not in the required position when it starts transmitting a signal, the signal will not reach the television, so it might not work properly or correctly. The satellite dish has to point in the same direction as the satellite. Satellite signals reach the television using a transmission antenna which is located at an uplink facility. The facility has an uplink satellite dish which would be around 9-12 meters in diameter. The bigger the diameter of the dish, the more accurate the signal and the better the signal strength received from the satellite. The satellite dish is pointed towards the satellite and the uplinked signal is received by the transponder at a certain frequency, normally C-band (4-8 GHz) or Ku-band (12-18 GHz). The transponder then retransmits the signal back to the earth. NTSC, PAL and SECAM are three broadcast standards used throughout the world. NTSC is normally used in the US, Canada, Japan, Mexico, the Philippines, South Korea and other countries.
PAL, which stands for Phase Alternating Line, is a colour encoding system which is used by over 120 countries in the world. In a few years' time most of these countries will stop using PAL and change to DVB-T. SECAM – Sequential colour with memory is an analogue colour television system. SECAM was Europe's first colour television standard and France currently uses it. The analogue signals for the three broadcasting types are transmitted via a satellite link either scrambled or unscrambled. The analogue signal is frequency modulated and is transformed from FM to something called baseband. The baseband fuses the audio and video subcarriers. The audio subcarrier is further demodulated to provide a raw audio signal. Digital TVs that transmit via satellites are normally based on open standards such as MPEG and DVB-S. MPEG, which stands for Moving Picture Experts Group, is a compressed format which codes moving pictures and associated audio information. There is also MPEG-2, which is a digital television signal broadcast via terrestrial, cable and direct broadcast satellite TV systems. DVB-S, which stands for Digital Video Broadcasting – Satellite, is a standard for satellite TV which covers forward error coding and modulation. It is used by every single satellite that serves a continent. Standards Organizations There are different types of standards and standards organizations. The ones covered here are TIA/EIA, RS-232, IEEE, ISO/OSI and Manchester encoding. ISO/OSI – The International Standards Organization's Open System Interconnect (ISO/OSI) is the standard model for networking protocols and distributed applications. ISO/OSI defines seven network layers: 1. Physical 2. Data Link 3. Network 4. Transport 5. Session 6. Presentation 7. Application. I will only be explaining the first network layer, Physical, in depth. This layer defines what cable or physical medium is to be used. There are lots of different types of cable: thinnet, thicknet, TPC, UTP. All of these mediums are functionally the same. The major difference between the various cables is the cost, convenience, installation and maintenance. Converters from one medium to another operate at this level. TIA/EIA – The Telecommunications Industry Association and Electronics Industries Alliance (TIA/EIA) state the standards which should be used when laying cables in a building or a campus. TIA/EIA describes how a hierarchical topology should be laid out: a system where a main cross connect is used and connected in a star topology, using backbone cabling, through an intermediate or a horizontal cross connect. This type of cabling, or similar, is also used for laying out telecommunication cables. The backbone cabling method is used to connect the entrance facilities to the main cross connect. In areas such as offices, a horizontal cross connect is used for the consolidation of the horizontal cabling, which extends out in a star topology. The maximum stated horizontal cable distance should be anywhere between 70 m and 90 m. This applies to twisted pair cable, but fibre optic horizontal cabling has a set limit of 90 m. IEEE – The Institute of Electrical and Electronics Engineers allows the development of "electro-technology", which in other words means electricity applied to technology. Societies like the IEEE Computer Society are subsidiaries of the IEEE itself. This standards organization also publishes journals. Devices such as digital cameras need a set amount of bandwidth, so they use an IEEE-standard plug.
Any device that uses the IEEE standard uses a twisted pair cable. Signalling Standards NRZ – Stands for Non Return to Zero. It is a binary code normally used for slow-speed synchronous and asynchronous transmission interfaces. A one is represented as a small voltage and a zero as a negative voltage; they are transmitted as set or constant DC voltages. It also uses additional synchronisation so it does not lose any bits in the process. NRZ-L – Non Return to Zero Level is similar to NRZ. As with NRZ, a one is represented as a small voltage, but a zero is also represented as a small voltage, just not as large a voltage as for a one. Therefore it allows more data to be sent without a lot of signal change. NRZ-M – Non Return to Zero Mark is again similar to NRZ, but a one is represented by a change in physical state and a zero is represented by no change in physical state. This basically means that there is no voltage change when there is no change in state. RS-232 – This standard applies to serial data transfer, such as the 9-pin serial connectors which are commonly used on a computer motherboard. The data is sent as a time series of bits. Synchronous and asynchronous transmission are both supported by this standard. The standard states the number of control circuits that can be or need to be used to connect the DCE and DTE terminals with one another. Data and control circuits signalled from a DTE connected to a DCE, or vice versa, that flow and operate in only one direction at a time are working in half duplex; only full duplex allows data to be sent and received in both directions at the same time. Manchester Encoding – Data bits which are represented by transitions from one logical state to another are said to use Manchester encoding. This is a digital encoding. In this encoding the signal is self-clocking because the length of every data bit is set by default, and depending on the transition direction the state of the bit can be determined. Because the signal synchronizes itself, the error rate decreases and reliability improves. On the other hand, it is also a disadvantage, because the number of signal elements transmitted has to be twice the number of bits in the original signal (a short encoding sketch follows the cable comparisons below). Differential Manchester – Also known as Conditioned Diphase (CDP). It is an encoding method in which the data and clock signals are fused to create a self-synchronizing data stream. Similar to Manchester encoding, it uses the presence or absence of transitions to represent a logical value. TTL – Transistor-Transistor Logic is a binary code which uses high voltages between 2.2 V and 5 V to represent a one and low voltages between 0 V and 0.8 V to represent a zero. The following comparison covers transmission media by name, specification, maximum length/speed, advantages and disadvantages: Cat.5 cable – RJ45 connector; made from copper, PVC and plastic. Length: 100 m; speed: 100 Mbit/s. Advantage: cheapest type of cable. Disadvantage: mostly unshielded and more prone to electrical noise. Cat.6 cable – RJ45 connector; made from copper, PVC and plastic. Length: 100 m; speed: 10 Gbit/s. Advantage: very fast transmission. Disadvantage: unshielded and more expensive than Cat.5. Cat.7 cable – RJ45 connector; made from copper, PVC and plastic. Length: 100 m; speed: 100 Gbit/s. Advantage: extremely fast with less interference. Disadvantage: incredibly expensive and most likely not manufactured widely until 2013. Bluetooth – Mostly copper for the circuitry, plastic for the casing. Length: 100 m; speed: 1 Gbit/s. Advantages: can send from cell phones, does not need wires to connect, and most phones are equipped with it. Disadvantages: open, so other people can access your phone if it is not protected, and quite slow when sending and receiving on a cell phone. Infrared – Receiver, antenna and transmitter; copper and plastic. Length: 40 km; speed: 4 Mbit/s. Advantages: shorter wave than microwaves, not as harmful, and less interference. Microwaves – Antenna and receiver. Length: 1 m; speed: 300 GHz. Advantage: good for sending data over longer distances. Disadvantages: dangerous if something that uses microwaves (e.g. a cell phone) is used for too long, and too much interference. Wi-Fi – Wi-Fi signal transmitter. Length: 95 m; speed: 5 GHz. Advantages: usable anywhere in the house and can even be used as a hotspot in public places, e.g. airports, cafés, etc. Disadvantage: other people can also access it, so many connections can make it slow. Satellite – Dish and a satellite in orbit. Length: 22,000 miles; speed: 40 Mbit/s. Advantage: connection from anywhere in the world. Disadvantage: delay of up to 500 milliseconds due to rain or moisture. Fibre optic – LED/laser connector; glass, plastic, PVC. Length: 40,000; speed: 10 Gbit/s. Advantage: extremely fast speeds can be achieved over long distances without the use of switches, hubs, etc. Disadvantage: simply expensive to buy. Radio – Transmitter and receiver. Length: 100 miles; speed: 300 GHz. Advantage: available anywhere in the world. Disadvantage: very limited range and easily affected by interference. A second comparison covers cabling and wireless media by specification, advantages and disadvantages: Coaxial – Diameter: 6 mm; resistance: 85.2 km; capacity: 70 km at 1 kHz. Advantages: cheap, 500 meters length, very reliable. Disadvantages: expensive and hard to install. Fibre optic – Diameter: 2 microns. Advantages: good for transmission over long distances because it is immune to magnetic interference, protected against electrical interference, stretches up to 3000 meters, and no noise is generated. Disadvantages: very expensive, and over time the sent signal gets weaker because of signal reflection. UTP/Cat.5 – Unshielded; uses copper wire. Advantages: stretches up to 100 meters, easy installation, and transmission rates reaching up to 1 Gbps. Disadvantage: very open to interference. STP – Foiled and also uses copper wire; shielded. Advantages: length up to 100 meters, transmission rates between 10-100 Mbps. Disadvantages: expensive, heavy and big in physical size. Radio – Uses an antenna to transmit; the signal can be refracted. Advantages: no wires needed and capable of very long distances. Disadvantages: the signal gets weaker the longer it takes to reach the destination, and security is a problem as it is very open to hackers. LAN – This is a type of network which covers a small office, home or school. A LAN uses either wired Ethernet or wireless RF technology. Using a LAN makes things much easier when there is a printer available or a file is being shared throughout the network. Updating software is much easier because one update will automatically update all the other copies of the software. A LAN has much higher transmission rates when it is a wired connection rather than wireless. Ethernet and Wi-Fi are the most widely used technologies, though many others, such as token ring, have been used before. This relates to standard IEEE 802.2. This standard allows two connectionless and one connection-oriented operational modes: Type 1, a connectionless mode, allows frames to be sent to a single destination or multiple destinations on the same network; Type 2 is the connection-oriented operational mode, which uses something called sequence numbering to make sure that when data is sent it gets to the destination in the correct order and not a single frame has been lost; and Type 3 is also a connectionless service, but only supports point-to-point communication.
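Looking back at the Manchester encoding described under signalling standards above, here is a minimal sketch of the idea; it is an illustration only, and the polarity convention shown is an assumption matching one common (Ethernet-style) choice.

```python
# Minimal sketch of Manchester encoding: every data bit becomes a mid-bit transition,
# so the signal carries its own clock. The polarity below follows one common
# (Ethernet-style) convention: 1 -> low-then-high, 0 -> high-then-low.

def manchester_encode(bits: str) -> str:
    halves = {"1": "01", "0": "10"}        # each bit becomes two half-bit levels
    return "".join(halves[b] for b in bits)

data = "10111111"
encoded = manchester_encode(data)
print(data, "->", encoded)                 # 10111111 -> 0110010101010101
# Note the doubling: 8 data bits become 16 half-bit line levels, which is the
# trade-off mentioned above (self-clocking at the cost of twice the signalling rate).
```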
Infrared also relates to LAN communication, because in a computer infrared network data can be received and transmitted either through the side of the device or through the rear side of the device. When connections are made using Microsoft Windows infrared, the same method used for LAN connections can be used. Infrared technology has been extended to allow more than two computers to be connected in semi-permanent networks. The advantage of a LAN is that the same physical communication path can be shared by multiple devices. For example, if there is a printer, a computer and an internet connection, the LAN will allow connections to the printer and it will also allow connections to the internet. If software is loaded onto the file server then all the computers on the network can use it. There are quite a few drawbacks of a LAN network. For example, security measures need to be taken so that users cannot access unauthorised areas. It is quite hard to set up the network, and skilled technicians are needed to maintain it. Yet the biggest disadvantage is that if the file server goes down then all the other computers on the network are affected as well. WAN – This type of network covers a wider area. It is used for high speed, long distance communications, such as between computers in two different areas. A WAN can also be shared; for example, two occupants in two buildings can share the wireless connection with a third person, or a business, or anyone or anything they wish. Data is safe, secure and quick when it is transmitted between two computers. A WAN can also be used to connect different types of networks together, for example a WAN network connected to a LAN network. An example of this is AppleTalk. It is a cheap LAN architecture which is a standard model built into all Apple Macintosh computers and laser printers. It also supports the Apple LocalTalk cabling scheme as well as Ethernet and IBM token ring. AppleTalk can connect to standard computers which do not have AppleTalk. This all relates to the FDDI standard, which stands for Fibre Distributed Data Interface. It is a backbone for a wide area network. It uses fibre optic cable to transmit data at a supported rate of up to 100 Mbps. An advantage of a WAN is that it allows secure and fast transmission between two computers. Data transmission is inexpensive and reliable. Sharing a connection is easy as well, because it allows direct connectivity. A WAN also allows the sharing of software and resources with other workstations connected on the network. A disadvantage of a WAN network is that the signal is present all the time, so anyone trying to hitchhike a connection can use it if it is not protected. WANs are also slow and expensive to set up, and they need a good firewall to stop intruders using the connection. Networking Mediums Different types of medium are used for different types of topologies. Coaxial Cable – It is normally used to connect telecommunication devices used for broadband connections which need high transmission rates to transfer data. The cable is insulated using a braided shield which is also known as a screen. It protects the cable from electromagnetic interference. It has higher capacity than a standard copper wire, which allows radio frequencies and television signals to be transmitted. Various types of coaxial cable are available: thin Ethernet cable is used for networking 10 Mbps connections stretching up to 200 meters, and thick Ethernet cable is also used for 10 Mbps connections but stretching up to 500 meters.
In the past, networks were built with thick or thin Ethernet over coaxial cable; unshielded twisted pair (UTP) is now normally used instead. Ethernet cables are quite expensive, but they are still used because they carry more data than a telephone wire and are less susceptible to interference.

Optical Fibre

Optical fibre, also known as fibre optic cable, uses light to transmit data. Light produced by a laser or an LED is sent down a fibre, which is a thin strand of glass. A fibre is about 2 microns in diameter, roughly 15 times thinner than a single human hair. Optical fibre is not affected by electromagnetic interference and is capable of higher data transmission rates, making it ideal for broadband use. Fibre optics are manufactured in two different types, single mode and multimode. The difference between the two is straightforward: single mode uses one beam of light to transmit data over a longer distance of around 3 km, whereas multimode uses multiple beams of light but only over a shorter distance of around 2 km; using multiple beams allows more data to be sent simultaneously. Fibre is normally used for broadband transmission, as mentioned before, because it is faster than any other cable currently available. Fibre optics also have the advantage of long-distance transmission, because light propagates through the fibre with little attenuation compared with electric cables, so not many repeaters are needed over long distances. Data travelling through the fibre can reach rates of up to 111 Gbps. Fibre also prevents high voltages travelling from one end of the link to the other, and it restricts crosstalk and environmental noise between signals transmitted on different cables.

UTP (Unshielded Twisted Pair) & STP (Shielded Twisted Pair)

UTP and STP both use copper wires, one of the oldest types of transmission media. STP is insulated with a metallic foil underneath the plastic sheath. This insulation is expensive to make, which is why STP is more expensive than normal cable. Even though STP cable is shielded, there is still crosstalk; it cannot be eliminated. In both UTP and STP the individual wires are twisted together, which creates less crosstalk. The core of each type of cable is a very good conductor and easy to work with. UTP is used for internet connections because it is easy to install and maintain, is less expensive and allows higher transmission rates. STP is also used for internet connections, but it is more expensive and difficult to install; the advantage is that there is less interference. It is difficult to install because it has to be grounded at both ends, and improper grounding will result in the metallic shielding acting as an antenna and picking up unwanted signals. Due to the cost and the difficulty of installation and maintenance it is hardly used in Ethernet networks; it is mainly used in Europe.

Crosstalk

Crosstalk means that signals transmitting in different circuits interfere with each other. It happens when an unwanted signal undesirably interferes with another transmitting channel.

Electrical Noise

Noise is an electrical signal transmitted across a wire which is not the signal sent by the user, but another signal that has been picked up randomly. Twisted pair cables reduce this interference because the wires are twisted around each other, so the picked-up noise cancels out. The thickness and varied insulation of a cable, and the capacitance of the wires, will also cause noise.
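A very simplified sketch of this cancellation idea is given below, assuming the pair carries the signal differentially (one wire positive, the other negative) and that interference couples equally onto both wires. Real cables and receivers are analogue, so this is only a numerical illustration of the principle, not how any particular cable is specified.

```python
import random

# Illustrative model of differential signalling on a twisted pair.
# The signal is sent as +v on one wire and -v on the other; external noise
# couples (roughly) equally onto both wires, so subtracting them removes it.

signal = [1, -1, 1, 1, -1, -1, 1]          # the levels we want to send

wire_a, wire_b = [], []
for v in signal:
    noise = random.uniform(-0.4, 0.4)      # the same interference hits both wires
    wire_a.append(+v + noise)
    wire_b.append(-v + noise)

recovered = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]   # common noise cancels
print([round(x, 3) for x in recovered])    # matches the original signal levels
```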
For example, when two people are talking on a telephone and either person cannot hear the message clearly, this is caused by noise affecting the signal. This is known as crosstalk; as mentioned before, crosstalk is when a signal is affected by the electromagnetic field around a wire. Electrical noise cannot be eliminated, but it can be minimised by taking precautions: keep cables away from electrical equipment and shield the cable, whether it is a fibre optic or an STP cable.

Checksum

A checksum is a method used for error checking by comparing the received data against a calculated checksum. For example, when data is received by the designated node, the checksum error detection method performs a new calculation and checks it against the old one to see whether the same result is obtained. This makes sure the data has not been altered in any way while it was being transmitted. The calculation applied to the data is called the checksum function or checksum algorithm.

Cyclic Redundancy Check (CRC)

CRC is another type of error-checking technique used in data communication. A CRC character is generated at the end of the transmission, and its value depends on the bit pattern of the data block. The node receiving the transmission makes a similar calculation and compares it with the value from the source node; if the values are different, it asks for retransmission of the data.

Frame

A frame is a collection of bits sent over a medium. It contains the physical address and control information, and it also carries error-detection information such as a CRC. The size and role of the frame depend on the type of protocol, and the term is often used synonymously with packet. The data is split up if necessary and placed into Ethernet frames. The size of an Ethernet frame varies between 64 and 1,518 bytes and it follows IEEE 802.3; it contains the address, length, data and an error-checking field. The data is passed on to the lower-level components corresponding to the OSI physical layer, which convert the frames into a bit stream and send it over the transmission medium. Other network adapters on the Ethernet receive the frame and analyse it for the destination address. If the destination address matches that of the network adapter, the adapter software processes the incoming frame and passes the data to the higher levels of the protocol stack.

Packets

A packet is a unit of data sent across a network. When a computer transmits data it splits it into packets, and when they reach the defined node they are transformed back into the original transmission. Also known as a datagram, a packet contains two parts: the header, which acts as an envelope, and the payload, which is the contents. Any message over 1,500 bytes in size is fragmented into packets for transmission. When a packet filter is put in place, the header of each incoming and outgoing packet is analysed and a decision is made whether to let the packet pass or to restrict it, based on network rules. The two sketches below illustrate the error-checking and fragmentation ideas described in this section.
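To make the checksum and CRC ideas concrete, here is a small sketch. It uses a naive byte-sum checksum for the first method and Python's built-in zlib.crc32 for the CRC; the receiver recomputes each value and compares it with the one sent, asking for retransmission if they differ. The corrupted byte is an invented example, and neither function represents the exact algorithm any particular network uses.

```python
import zlib

def simple_checksum(data: bytes) -> int:
    """A naive checksum: the sum of all bytes, kept within 16 bits."""
    return sum(data) & 0xFFFF

message = b"hello network"
sent_checksum = simple_checksum(message)   # values computed by the sender
sent_crc = zlib.crc32(message)

# Simulate corruption of one byte in transit.
corrupted = b"hellp network"

# The receiver recomputes and compares; a mismatch means the data was altered.
print(simple_checksum(corrupted) == sent_checksum)  # False -> error detected
print(zlib.crc32(corrupted) == sent_crc)            # False -> ask for retransmission
```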
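The second sketch illustrates the fragmentation described in the Packets paragraph: a long message is split into numbered chunks of at most 1,500 bytes and put back together at the other end. The two-field header (packet number and total packet count) is made up purely for this illustration and is not a real protocol header.

```python
# Illustrative fragmentation into packets; the header fields here are an
# assumption for the sketch, not taken from any real protocol.

MAX_PAYLOAD = 1500   # bytes of payload per packet

def fragment(message: bytes):
    """Split a message into numbered packets of at most MAX_PAYLOAD bytes."""
    chunks = [message[i:i + MAX_PAYLOAD] for i in range(0, len(message), MAX_PAYLOAD)]
    return [{"number": n, "total": len(chunks), "payload": chunk}
            for n, chunk in enumerate(chunks)]

def reassemble(packets):
    """Use the header numbers to restore order and rebuild the original message."""
    ordered = sorted(packets, key=lambda p: p["number"])
    return b"".join(p["payload"] for p in ordered)

message = b"x" * 4000                 # bigger than 1,500 bytes, so it is fragmented
packets = fragment(message)
print(len(packets))                   # 3 packets: 1500 + 1500 + 1000 bytes
print(reassemble(packets) == message) # True -> the original transmission is restored
```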
Data Transmission Modes & Methods

There are three different modes in which transmission can take place: data is transmitted using simplex, half duplex or full duplex mode. There are also two different methods by which data is transmitted: serial transmission or parallel transmission.

Simplex

In simplex mode data can only travel in one direction. Television and radio broadcasting are examples of simplex. Fibre optic links can work in simplex mode. Simplex is well suited to satellite communications; the clear transmission of TV signals shows that satellite communication works well in this mode. Simplex is rarely used for computer-based telecommunications.

Half Duplex

In half duplex mode data can travel both ways, but only in one direction at a time. Coaxial cable works in half duplex mode. Radio is an example of half duplex, because the signal reaches the destination and a reply then comes back to the original source. Communication between networks can also work at half duplex: if one node is transmitting a message and another node wants to transmit, it has to wait until the token comes back round.

Full Duplex

In full duplex mode data can travel in both directions at the same time, with the bandwidth divided between the two directions. UTP and STP media work at full duplex. Bluetooth is an example of full duplex, because data can be received and sent by both devices at once. Another example is a landline telephone, because people at both ends of the call can speak and listen to each other at the same time.

Serial Transmission

In serial transmission one bit is sent at a time. It is good for communication between several participants, but it is slow. When the data is sent it is disassembled by the source and reassembled by the receiver.

Parallel Transmission

Parallel transmission is when every bit is sent simultaneously, each over a separate wire. When data is transmitted over a parallel channel, for example 8 bits or a byte, everything is sent at the same time, so it is faster than serial transmission, where a serial channel would send the 8 bits one by one. The most common example is communication between a printer and a computer. The sketch below contrasts the two methods for a single byte.
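As a final illustration, here is a minimal sketch contrasting serial and parallel transmission of one byte: serially the eight bits are sent one after another on a single line, while in parallel each bit travels on its own line in the same clock cycle. The "wires" are just Python lists standing in for physical lines, so this only demonstrates the idea, not real I/O.

```python
# Illustrative contrast between serial and parallel transmission of one byte.
# Lists and a dictionary stand in for physical wires; this is a sketch only.

byte = 0b01101001
bits = [(byte >> i) & 1 for i in range(7, -1, -1)]   # most significant bit first

# Serial: one wire, the bits are sent one clock tick after another (8 ticks).
serial_wire = []
for bit in bits:
    serial_wire.append(bit)
print("serial:  ", serial_wire)

# Parallel: eight wires, every bit is placed on its own wire in the same tick.
parallel_wires = {f"wire_{i}": bit for i, bit in enumerate(bits)}
print("parallel:", parallel_wires)
```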