Thursday, August 27, 2020

Higher Education Institutions Essay

Higher education institutions need to continually adapt to meet the needs of students, employers, and society in general. To meet these needs, the theoretical knowledge and skill-based components required of graduates entering the workforce must be constantly reviewed. As a result, academics face ongoing challenges in developing innovative teaching practices, activities that promote skill development, and assessment tasks that will equip graduates with the employability skills necessary for their profession, for lifelong learning, and for self-development. The use of reflective practice as an integral component of undergraduate assessment and skill development is seen across a variety of academic disciplines. Reflective practice is commonly used across these disciplines to engage students in developing self-reflection skills and abilities, and in contextualising the relationships between theoretical knowledge and professional experience and outcomes. Reflective practice can be defined as a learning process involving the examination of personal critical incidents and practices, the deconstruction of one's experiences in the light of knowledge held, and the resulting construction of new knowledge which can be applied to personal or professional practice (Davis, 2003; Klenowski and Carnell, 2006; Murphy, Halton, and Dempsey, 2008; Pedro, 2005). Reflective practice is self-regulated, and engages the learner in a process of relating theory and practice (Kuiper and Pesut, 2004; Lesnick, 2005; Pavlovich, 2007).

Research has shown that developing reflective practice skills, and engaging higher education students in this process, allows for an examination of how personal experiences prompt learning, and how this learning relates to professional experience (Bates, 2008). The process of engaging in reflective practice enables the development of personal and professional skills that lead to an integration of personal knowledge and experience, academic theory and knowledge, and relevant professional experience (Donaghy and Morss, 2007; McMullan, 2006; Thorpe, 2004). Furthermore, this newly integrated and formed knowledge, through guided support or supervision, can then be applied to future professional outcomes and experiences (O'Halloran, Hean, Humphris, and Macleod-Clark, 2006). Recent research highlights the need for reflective practice processes to be contextualised to the post-graduation employment sector students are preparing to practise in (Boud and Falchikov, 2006; Lesnick, 2005; Pedro, 2005). Accordingly, it is recommended that student assessment be based on students developing the skills for work that are required beyond the university experience. Integrated within the development of these skills is the need for students to consider and judge their learning experiences and achievements, determine how adequate their performance has been, critically engage in a process of self-reflection, and evaluate their performance (Hinett and Weeden, 2000; McMullan, 2006). This process, initiated and developed through the teaching of and engagement in reflective practice, leads students to self-regulate their learning and work tasks, identify and be motivated to develop areas for change, and advances further learning.
Hence, learning is contextualised through reflective practice (Boud and Falchikov, 2006; Lesnick, 2005; Pedro, 2005). It is important to note that reflective practice is not only beneficial to learners, but also to university tutors, lecturers, and course coordinators (Clegg, Tan, and Saeidi, 2002; Crow and Smith, 2005; Pedro, 2005; Thorpe, 2000). Unlike what is observed in students, the process of assessors examining student assessments containing writing framed as reflection places academics in a position to question the perceived needs of students and the curriculum being taught, relative to (i) what modules are being taught in the curriculum, (ii) why specific curriculum modules are included within the course syllabus, (iii) whether the curriculum modules are effective in meeting student and workplace needs, (iv) whether curriculum modules need to be changed to better address the perceived needs and skills of students and workplaces, and (v) how the identified curriculum modules can be changed (Pedro, 2005). The examination of student reflections can lead to an evaluation of the pros and cons of the existing course curriculum, and the redevelopment and realignment of that curriculum to enhance student learning and skill development (Bulpitt and Martin, 2005; Kember, McKay, Sinclair, and Wong, 2008). Furthermore, the transferability of skills from higher education into the workplace may be improved, and the employment opportunities for students post-graduation increased, through the assessment of student reflections (Bulpitt and Martin, 2005; Harris and Bretag, 2003; Pedro, 2005).

It was my third semester as a student nurse, when I was posted to Hospital Selayang. The incident happened on a Thursday morning. There were around eight patients on the beds; some of them were in traction, some were having a wound dressing on the leg, and some had undergone amputation. At that time, our clinical instructor (CI) was not around because she had to cover two wards. The incident happened when one of the staff nurses assigned me to do a simple wound dressing. One of my colleagues and I prepared all the equipment needed for the wound dressing. After we finished assembling the equipment, we went straight to the patient and started the dressing. Because the ward was very busy, we decided to proceed without supervision by a staff nurse. During the dressing, the ward sister suddenly came in and saw us doing the dressing without supervision by our CI. Worse still, we had forgotten to place an incontinence pad below the patient's leg. The sister shouted at us to stop the dressing and move away from the patient. She said that we were not allowed to carry out any procedure without supervision by our CI, and then reprimanded us for not following the principles of wound dressing. I had to write a report about what had just happened. After being reprimanded by the sister and the CI, I felt guilty and could not stop thinking about how I had made a mistake in just a simple wound dressing. Even then, I kept blaming myself for the mistake I had made. I never thought it would happen to me. I was shocked and frightened because I had not done the procedure under the supervision of my own CI and had been caught by the ward sister. This was the first time I had heard the sister's voice raised, right in front of me.
I went numb and was stunned; my mind went blank and empty. I felt guilty and could not stop thinking about it every time I went to the posting area. I felt stressed because the sister was annoyed. I will take this incident as a great lesson and use it to make a change in myself in the future. As I analysed and went back through the situation, I asked myself how it could have happened, and happened to me. The first and main point is that I felt overconfident about carrying out the wound dressing procedure. Because I felt the procedure was simple, I failed to comply with the principles of wound dressing: I forgot to place the incontinence pad and failed to maintain sterility in my field. Along with that, my lack of knowledge also caused my dressing technique to become confused and hard to manage. I also realised that my level of knowledge of wound dressing is still low; a little experience is not enough to master the wound dressing procedure. Furthermore, I realised that by doing the procedure without supervision by the CI or a staff nurse, I made a big mistake that does not fit the role of a student nurse. I should do nothing without supervision by the CI. Disobeying the CI's instructions is bad behaviour that I need to change in order to be a good student. In my view, the way the situation could change in the future is that I should do more practice on my wound dressing technique. At the moment, I do not have enough knowledge to answer the sister's questions and, most importantly, I must be careful in maintaining sterility so that I will not be reprimanded by the sister again. The implications of breaching the wound dressing principles for the institution are that students from the same institution may be rejected by that hospital because of poor quality. The student nurse may suffer low self-esteem when facing patients in front of their seniors. Moreover, family members may lose trust in our work and not allow students to perform any procedure on the patient. Besides that, they may stop trusting my institution, and we would have to hand over to the staff nurse in charge. My recommendation for how I could improve my skill in the future is regular practice of the correct dressing procedure. Repeated practice of wound dressing will help me build sound skill in that procedure. My theoretical knowledge is still weak, so I need to acquire knowledge and information to strengthen it. The skill will become more practical under the guidance of the CI, who will correct any error while I am doing a procedure and add useful points. Besides that, I should make sure that all equipment is complete, without any missing instrument, before doing a wound dressing. I need to double-check to minimise the risk of forgotten equipment; with this, the chance of such a mistake will be close to zero. I was not aware that all nurses should work within a collaborative framework that encourages partnering with others. This is because most nurses today receive supervision during their orientation to a clinical position, or coaching for a special project, or mentoring that empowers them. Besides that, I am also aware that anyone who wants to be more proficient at doing a wound dressing needs such guidance.
We should consider the concept of mentoring in nursing, whose strongest association has been suggested to be as a 'teaching-learning process for the socialization of nurse scientists and scholars and the proliferation of a body of professional knowledge' (Stewart and Krueger, 1996). This is because a student needs assistance

Saturday, August 22, 2020

The mesh generation

The mesh generation: describe general techniques (structured, unstructured, hybrid, adaptive, etc.) and discuss their key features and applications.

A key step of the finite element method for numerical computation is mesh generation. One is given a domain (such as a polygon or polyhedron; more realistic versions of the problem allow curved domain boundaries) and must partition it into simple "elements" meeting in well-defined ways. There should be few elements, but some portions of the domain may need small elements so that the computation is more accurate there. All elements should be "well shaped" (which means different things in different situations, but generally involves bounds on the angles or the aspect ratio of the elements). One distinguishes "structured" and "unstructured" meshes by the way the elements meet; a structured mesh is one in which the elements have the topology of a regular grid. Structured meshes are typically easier to compute with (saving a constant factor in runtime) but may require more elements or worse-shaped elements. Unstructured meshes are often computed using quadtrees, or by Delaunay triangulation of point sets; however, there are quite varied approaches for selecting the points to be triangulated.

The simplest algorithms compute nodal placement directly from some given function; these are referred to as algebraic algorithms. Many of the algorithms for the generation of structured meshes are descendants of "numerical grid generation" algorithms, in which a differential equation is solved to determine the nodal placement of the grid. In most cases the system solved is an elliptic system, so these methods are often referred to as elliptic methods. It is difficult to make general statements about unstructured mesh generation algorithms because the most prominent methods are very different in nature. The most popular family of algorithms consists of those based on Delaunay triangulation, but other methods, such as quadtree/octree approaches, are also used.

Delaunay methods: many of the commonly used unstructured mesh generation techniques are based on the properties of the Delaunay triangulation and its dual, the Voronoi diagram. Given a set of points in a plane, a Delaunay triangulation of these points is the set of triangles such that no point is inside the circumcircle of any triangle. The triangulation is unique if no three points are on the same line and no four points are on the same circle. An analogous definition holds in higher dimensions, with tetrahedra replacing triangles in 3D.
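As a concrete illustration of the Delaunay approach (not part of the original text), the sketch below triangulates a random point set with SciPy's wrapper around Qhull; the point set and its size are arbitrary choices of mine.

```python
# Minimal sketch: an unstructured triangular mesh from a point set via
# Delaunay triangulation, using scipy.spatial.Delaunay (a Qhull wrapper).
import numpy as np
from scipy.spatial import Delaunay

# Sample points in the unit square; a real mesher would place points
# according to a sizing function so small elements appear where needed.
rng = np.random.default_rng(0)
points = rng.random((30, 2))

tri = Delaunay(points)

# tri.simplices holds one row of three point indices per triangle.
print("number of triangles:", len(tri.simplices))
print("first triangle (point indices):", tri.simplices[0])
```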
Quadtree/octree methods. Mesh adaptation, often referred to as Adaptive Mesh Refinement (AMR), refers to the modification of an existing mesh so as to accurately capture flow features. Generally, the goal of these modifications is to improve resolution of flow features without an excessive increase in computational effort. We discuss briefly some of the concepts important in mesh adaptation. Mesh adaptation strategies can usually be classified as one of three general types: r-refinement, h-refinement or p-refinement. Combinations of these are also possible, for example hp-refinement and hr-refinement. We summarise these types of refinement below.

r-refinement is the modification of mesh resolution without changing the number of nodes or cells present in a mesh or the connectivity of a mesh. The increase in resolution is obtained by moving the grid points into regions of activity, which results in a greater clustering of points in those regions. The movement of the nodes can be controlled in various ways. One common technique is to treat the mesh as if it were an elastic solid and solve a system of equations (subject to some forcing) that deforms the original mesh. Care must be taken, however, that no problems arise due to excessive grid skewness.

h-refinement is the modification of mesh resolution by changing the mesh connectivity. Depending on the technique used, this may not result in a change in the overall number of grid cells or grid points. The simplest strategy for this type of refinement subdivides cells, while more complex techniques may insert or remove nodes (or cells) to change the overall mesh topology. In the subdivision case, each "parent cell" is divided into "child cells". The choice of which cells are to be divided is addressed below. For each parent cell, a new point is added on each face; for 2-D quadrilaterals, a new point is also added at the cell centroid. Joining these points gives four new "child cells", so each quad parent gives rise to four new children. The advantage of such a strategy is that the overall mesh topology remains the same, with the child cells taking the place of the parent cell in the connectivity arrangement. The subdivision process is similar for a triangular parent cell. It is easy to see that the subdivision process increases both the number of points and the number of cells.

p-refinement, a popular tool in Finite Element Modelling (FEM) rather than in Finite Volume Modelling (FVM), achieves increased resolution by increasing the order of accuracy of the polynomial in each element (or cell).

In AMR, the selection of "parent cells" to be divided is made on the basis of regions where there is appreciable flow activity. It is well known that in compressible flows the major features include shocks, boundary layers and shear layers, vortex flows, Mach stems, expansion fans and the like. It can also be seen that each feature has some "physical signature" that can be exploited numerically. For example, shocks always involve a density/pressure jump and can be detected by their gradients, while boundary layers are always associated with rotationality and hence can be detected using the curl of velocity. In compressible flows the velocity divergence, which is a measure of compressibility, is also a good choice for detecting shocks and expansions. These sensing parameters, which indicate regions of the flow where there is activity, are referred to as ERROR INDICATORS and are popular in AMR for CFD. Just as refinement is driven by error indicators, as mentioned above, certain other issues also assume importance. Error indicators do identify regions for refinement, but they do not necessarily tell whether the resolution is sufficient at any given time. In fact, the problem is severe for shocks: the smaller the cell, the higher the gradient, and the indicator would keep picking the region unless a threshold value is provided.
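To make the idea of a threshold-based error indicator concrete, here is a small illustrative sketch (mine, not from the text) that flags cells of a 1-D grid for h-refinement when a normalised density gradient exceeds a cutoff; the field values and the threshold are invented.

```python
# Flag cells for h-refinement using a crude density-gradient error
# indicator with a threshold, as described above.
import numpy as np

def flag_cells_for_refinement(x, density, threshold=0.5):
    """Return indices of cells whose normalised density gradient
    exceeds `threshold` (a crude shock / error indicator)."""
    grad = np.abs(np.diff(density) / np.diff(x))   # gradient per cell
    grad_norm = grad / grad.max()                   # normalise to [0, 1]
    return np.where(grad_norm > threshold)[0]

x = np.linspace(0.0, 1.0, 21)
density = np.where(x < 0.5, 1.0, 0.125)             # a jump mimicking a shock
print(flag_cells_for_refinement(x, density))         # flags the cell at the jump
```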
Further, many users use conservative values while refining a region and generally end up refining more than the necessary portion of the grid, though not the complete domain. These refined regions are unnecessary and, in the strictest sense, add unnecessary computational effort. It is at this juncture that reliable and reasonable measures of cell error become necessary to carry out the process of "coarsening", which reduces the unnecessary refinement described above, with a view towards generating an "optimal mesh". Such measures are given by sensors referred to as ERROR ESTIMATORS, on which the literature is abundant in FEM but rare in FVM. Control of the refinement and/or coarsening via the error indicators is often undertaken by using either the solution gradient or the solution curvature. Hence the refinement variable, coupled with the refinement method and its limits, all need to be considered when applying mesh adaptation.

A hybrid model contains two or more subsurface layers of hexahedral elements, with tetrahedral elements filling the interior. The transition between the subsurface hexahedral and interior tetrahedral elements is made using degenerate hexahedral (pyramid) elements. High-quality stress results demand high-quality elements, i.e., aspect ratios and internal angles as close to 1:1 and 90° respectively as possible. High-quality elements are particularly important at the surface. To accommodate features within a part, the quality of the elements at the surface of a hexahedral model generally suffers, e.g., they become skewed. Mating parts, when node-to-node contact is desired, can also adversely affect the model's element quality. Much more difficult is creating a tetrahedral model that contains high-quality subsurface elements. In a hybrid model the hexahedral elements are only affected by the surface mesh, so creating high-quality elements is easy. Minimal effort is required to convert CAD data into surface meshes using the automated procedures of PRO-SURF, and these surface meshes are read by PRO-AM. The surface mesh is used to extrude the subsurface hexahedral elements; the thickness of each extruded element is controlled so that high-quality elements are generated. The interior is filled automatically with tetrahedral elements, and the pyramid elements that make the transition are also generated automatically. A hybrid model will generally contain many more elements than an all-hexahedral model, thus increasing analysis run-time. However, the time saved in the model construction phase, the more labour-intensive stage, more than compensates for the increased run-time, so overall project time is reduced considerably. Also, as computing power increases, this "disadvantage" will eventually disappear.
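Since element quality is judged here by aspect ratio and internal angles, below is a rough sketch (my own illustration, not from the text) that scores a 2-D quadrilateral on those two criteria; the vertex coordinates are invented.

```python
# Score a 2-D quadrilateral element by the two quality criteria mentioned
# above: aspect ratio close to 1:1 and internal angles close to 90 degrees.
# Vertices are assumed to be given in counter-clockwise order.
import numpy as np

def quad_quality(verts):
    v = np.asarray(verts, dtype=float)
    edges = np.roll(v, -1, axis=0) - v                 # 4 edge vectors
    lengths = np.linalg.norm(edges, axis=1)
    aspect_ratio = lengths.max() / lengths.min()
    angles = []
    for i in range(4):
        a, b = -edges[i - 1], edges[i]                 # edges meeting at vertex i
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        angles.append(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    max_angle_dev = max(abs(a - 90.0) for a in angles)
    return aspect_ratio, max_angle_dev

print(quad_quality([(0, 0), (1, 0), (1, 1), (0, 1)]))    # ideal square: (1.0, 0.0)
print(quad_quality([(0, 0), (2, 0), (2.5, 1), (0, 1)]))  # skewed quad: worse scores
```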
Hexahedral meshing: ANSYS Meshing provides multiple methods to generate a pure hex or hex-dominant mesh. Depending on the model complexity, the desired mesh quality and type, and how much time a user can spend meshing, the user has a flexible solution for generating a quick automatic hex or hex-dominant mesh, or a highly controlled hex mesh for optimal solution efficiency and accuracy. Mesh methods: automated sweep meshing, in which sweepable bodies are automatically detected and meshed with hex elements whenever possible; edge-increment assignment and side matching/mapping

Friday, August 21, 2020

Things Not To Do When You're Applying To SIPA COLUMBIA UNIVERSITY - SIPA Admissions Blog

We know that you're really excited about your application to SIPA. We are too! However, there are a few things we want to strongly caution against when you are applying. Please, for your sake and ours, heed this advice. It will only help your chances of gaining an acceptance letter from us.

Do not send extra materials to us. We know that you are thrilled with your writing portfolio, or that the PowerPoint presentation you made to your company went swimmingly. Unfortunately, with the number of applicants we have, there is simply not enough time to go through any additional materials, no matter how riveting they may be. Please refrain from sending anything that we don't specifically ask for.

When we ask for a quantitative resume, please don't send us another version of your professional resume. These two documents are meant to provide us with different information. Please ensure that the quantitative information that we ask for is on a separate sheet from the professional information. This attention to detail goes a long way in the admissions process.

Make sure you list which school you are applying to and for what term. We want to make sure that we're accepting you for the right program and for the right semester.

Send us three, count 'em, three letters of recommendation. No more. No less. We KNOW that you have 15 different people who can sing your praises, but we only need to hear from three of them.

No gifts, please! While we thank you in advance for your thoughtfulness, the SIPA Admissions committee cannot accept gifts of any kind.

Following these very simple tips will only help you in the process, and as always, we're looking forward to reading your applications.

Monday, May 25, 2020

Software Testing - How do we measure the progress of testing?

Chapter 10: Metrics and Models in Software Testing

How do we measure the progress of testing? When do we release the software? Why do we devote more time and resources to testing a particular module? What is the reliability of the software at the time of release? Who is responsible for the selection of a poor test suite? How many faults do we expect during testing? How much time and how many resources are required to test a software product? How do we know the effectiveness of a test suite? We may keep on framing such questions without much effort. However, finding answers to such questions is not easy and may require a significant amount of effort. Software testing metrics may help us to measure and quantify many things, which may provide some answers to such important questions.

10.1 Software Metrics

"What cannot be measured, cannot be controlled" is a reality in this world. If we want to control something, we should first be able to measure it. Therefore, everything should be measurable. If a thing is not measurable, we should make an effort to make it measurable. The area of measurement is very important in every field, and we have mature and established metrics to quantify various things. However, in software engineering this area of measurement is still in its developing stage and may require significant effort to make it mature, scientific and effective.

10.1.1 Measure, Measurement and Metrics

These terms are often used interchangeably. However, we should understand the differences amongst them. Pressman explained this clearly as [PRES05]: a measure provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute of a product or process; measurement is the act of determining a measure; and a metric is a quantitative measure of the degree to which a product or process possesses a given attribute. For example, a measure is the number of failures experienced during testing, measurement is the way of recording such failures, and a software metric may be the average number of failures experienced per hour during testing.

Fenton [FENT04] has defined measurement as: "It is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules." The basic issue is that we want to measure every attribute of an entity, and we should have established metrics to do so. However, we are still in the process of developing metrics for many attributes of the various entities used in software engineering.

Software metrics can be defined as [GOOD93]: "The continuous application of measurement-based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products." Many things are covered in this definition. Software metrics are related to measures which, in turn, involve numbers for quantification; these numbers are used to produce a better product and improve its related process. We may like to measure quality attributes such as testability, complexity, reliability, maintainability, efficiency, portability, enhanceability and usability for a software product.
We may also like to measure the size, effort, development time and resources for a software product.

10.1.2 Applications

Software metrics are applicable in all phases of the software development life cycle. In the software requirements and analysis phase, where the output is the SRS document, we may have to estimate the cost, manpower requirement and development time for the software. The customer may like to know the cost of the software and the development time before signing the contract. As we all know, the SRS document acts as a contract between customer and developer. The readability and effectiveness of the SRS document may help to increase the confidence level of the customer and may provide better foundations for designing the product. Some metrics are available for cost and size estimation, such as COCOMO, the Putnam resource allocation model and the function point estimation model. Some metrics are also available for the SRS document, such as the number of mistakes found during verification, change request frequency and readability.

In the design phase, we may like to measure the stability of a design, coupling amongst modules, cohesion of a module, and so on. We may also like to measure the amount of data input to a software product, processed by it and produced by it. A count of the amount of data input to, processed in, and output from software is called a data structure metric. Many such metrics are available, like number of variables, number of operators, number of operands, number of live variables, variable spans and module weakness. Some information flow metrics are also popular, like FAN-IN and FAN-OUT. Use cases may also be used to design metrics, like counting actors, counting use cases and counting the number of links. Some metrics may also be designed for various attributes of websites, like number of static web pages, number of dynamic web pages, number of internal page links, word count, number of static and dynamic content objects, time taken to search a web page and retrieve the desired information, and similarity of web pages.

Software metrics have a number of applications during the implementation phase and after its completion. Halstead software size measures are applicable after coding, like token count, program length, program volume, program level, difficulty, estimation of time and effort, and language level. Some complexity measures are also popular, like cyclomatic complexity, knot count and feature count. Software metrics have found a good number of applications during testing. One area is reliability estimation, where popular models are Musa's basic execution time model and the logarithmic Poisson execution time model. The Jelinski-Moranda model [JELI72] is also used for the calculation of reliability. Source code coverage metrics are available that calculate the percentage of source code covered during testing. Test suite effectiveness may also be measured. Number of failures experienced per unit of time, number of paths, number of independent paths, number of DU paths, percentage of statements covered and percentage of branch conditions covered are also useful software metrics. The maintenance phase may have many metrics, like number of faults reported per year, number of requests for changes per year, percentage of source code modified per year and percentage of obsolete source code per year.

We may find a number of applications of software metrics in every phase of the software development life cycle.
They provide meaningful and timely information which may help us to take corrective actions as and when required. Effective implementation of metrics may improve the quality of software and may help us to deliver the software in time and within budget.

10.2 Categories of Metrics

There are two broad categories of software metrics, namely product metrics and process metrics. Product metrics describe the characteristics of the product, such as size, complexity, design features, performance, efficiency, reliability and portability. Process metrics describe the effectiveness and quality of the processes that produce the software product. Examples are the effort required in the process, the time to produce the product, the effectiveness of defect removal during development, the number of defects found during testing and the maturity of the process [AGGA08].

10.2.1 Product metrics for testing

These metrics provide information about the testing status of a software product. The data for such metrics are also generated during testing and may help us to know the quality of the product. Some of the basic metrics are:

(i) Number of failures experienced in a time interval
(ii) Time interval between failures
(iii) Cumulative failures experienced up to a specified time
(iv) Time of failure
(v) Estimated time for testing
(vi) Actual testing time

With these basic metrics, we may derive some additional metrics, such as:

(ii) Average time interval between failures
(iii) Maximum and minimum failures experienced in any time interval
(iv) Average number of failures experienced in time intervals
(v) Time remaining to complete the testing

We may design similar metrics to find indications about the quality of the product.

10.2.2 Process metrics for testing

These metrics are developed to monitor the progress of testing, the status of design and development of test cases, and the outcome of test cases after execution. Some of the basic process metrics are:

(i) Number of test cases designed
(ii) Number of test cases executed
(iii) Number of test cases passed
(iv) Number of test cases failed
(v) Test case execution time
(vi) Total execution time
(vii) Time spent on the development of a test case
(viii) Total time spent on the development of all test cases

On the basis of the above direct measures, we may design the following additional metrics, which convert the base metric data into more useful information (see the short sketch at the end of this subsection):

(i) % of test cases executed
(ii) % of test cases passed
(iii) % of test cases failed
(iv) Total actual execution time / total estimated execution time
(v) Average execution time of a test case

These metrics, although simple, may help us to know the progress of testing and may provide meaningful information to the testers and the project manager. An effective test plan may force us to capture data and convert it into useful metrics for both process and product. This document also guides the organization for future projects and may also suggest changes in the existing processes in order to produce a good-quality, maintainable software product.
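The derived percentages above are simple ratios of the base counts. As a quick illustration (mine, not part of the original chapter), the sketch below computes them from a handful of invented counts.

```python
# Derived test-process metrics computed from base counts; all counts are
# illustrative values, not data from the chapter.
def derived_test_metrics(designed, executed, passed, failed,
                         actual_exec_hours, estimated_exec_hours):
    return {
        "% of test cases executed": 100.0 * executed / designed,
        "% of test cases passed":   100.0 * passed / executed,
        "% of test cases failed":   100.0 * failed / executed,
        "actual / estimated execution time": actual_exec_hours / estimated_exec_hours,
        "average execution time per test case (hours)": actual_exec_hours / executed,
    }

print(derived_test_metrics(designed=200, executed=150, passed=130, failed=20,
                           actual_exec_hours=75.0, estimated_exec_hours=60.0))
```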
10.3 Object Oriented Metrics used in Testing

Object oriented metrics capture many attributes of a software product, and some of them are relevant in testing. Measuring structural design attributes of a software system, such as coupling, cohesion or complexity, is a promising approach towards early quality assessment. There are several metrics available in the literature to capture the quality of design and source code.

10.3.1 Coupling Metrics

Coupling relations increase complexity, reduce encapsulation and potential reuse, and limit understanding and maintainability. The coupling metrics require information about attribute usage and method invocations of other classes. These metrics are given in Table 10.1. Higher values of coupling metrics indicate that a class under test will require more stubs during testing; in addition, each interface will need to be tested thoroughly.

Coupling between Objects (CBO): CBO for a class is a count of the number of other classes to which it is coupled [CHID94].
Data Abstraction Coupling (DAC): Data abstraction is a technique of creating new data types suited for an application to be programmed. DAC = number of ADTs defined in a class [LI93].
Message Passing Coupling (MPC): Counts the number of send statements defined in a class.
Response for a Class (RFC): The set of methods that can potentially be executed in response to a message received by an object of that class; it is given by RFC = |RS|, where RS is the response set of the class [CHID94].
Information flow-based coupling (ICP): The number of methods invoked in a class, weighted by the number of parameters of the methods invoked [LEE95].
Information flow-based inheritance coupling (IHICP): Same as ICP, but only counts method invocations of ancestors of classes.
Information flow-based non-inheritance coupling (NIHICP): Same as ICP, but only counts method invocations of classes not related through inheritance.
Fan-in: Count of modules (classes) that call a given class, plus the number of global data elements [BINK98].
Fan-out: Count of modules (classes) called by a given module, plus the number of global data elements altered by the module (class) [BINK98].
Table 10.1: Coupling Metrics
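To make the counting concrete, here is a toy sketch (mine, not from the chapter) that computes a CBO-style count from a class-dependency map; the class names and dependency sets are invented.

```python
# A CBO-style count: for each class, count the other classes it is coupled
# to in either direction (it uses them, or they use it).
uses = {
    "Order":    {"Customer", "Invoice", "Logger"},
    "Customer": {"Logger"},
    "Invoice":  {"Order", "Logger"},
    "Logger":   set(),
}

def cbo(cls):
    """Classes coupled to `cls` in either direction, excluding itself."""
    outgoing = uses[cls]
    incoming = {c for c, deps in uses.items() if cls in deps}
    return len((outgoing | incoming) - {cls})

for c in uses:
    print(c, cbo(c))
```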
10.3.3 Inheritance Metrics

Inheritance metrics require information about the ancestors and descendants of a class. They also collect information about methods overridden, inherited and added (i.e. neither inherited nor overridden). These metrics are summarized in Table 10.3. If a class has a larger number of children (or subclasses), more testing may be required for the methods of that class. The greater the depth of the inheritance tree, the more complex the design, as more methods and classes are involved. Thus, we may test all the inherited methods of a class, and the testing effort will increase accordingly.

Number of Children (NOC): The number of immediate subclasses of a class in a hierarchy [CHID94].
Depth of Inheritance Tree (DIT): The depth of a class within the inheritance hierarchy is the maximum number of steps from the class node to the root of the tree, measured by the number of ancestor classes.
Number of Parents (NOP): The number of classes that a class directly inherits from (i.e. multiple inheritance) [LORE94].
Number of Descendants (NOD): The number of subclasses (both directly and indirectly inherited) of a class.
Number of Ancestors (NOA): The number of superclasses (both directly and indirectly inherited) of a class [TEGA92].
Number of Methods Overridden (NMO): When a method in a subclass has the same name and type signature as in its superclass, the method in the superclass is said to be overridden by the method in the subclass [LORE94].
Number of Methods Inherited (NMI): The number of methods that a class inherits from its super (ancestor) class.
Number of Methods Added (NMA): The number of new methods added in a class (neither inherited nor overriding).
Table 10.3: Inheritance Metrics

10.3.4 Size Metrics

Size metrics indicate the length of a class in terms of lines of source code and the methods used in the class. These metrics are given in Table 10.4. If a class has a larger number of methods with greater complexity, then more test cases will be required to test that class. When a class with many complex methods is inherited, it will require more rigorous testing. Similarly, a class with a larger number of public methods will require thorough testing of those public methods, as they may be used by other classes.

Number of Attributes per Class (NA): Counts the total number of attributes defined in a class.
Number of Methods per Class (NM): Counts the number of methods defined in a class.
Weighted Methods per Class (WMC): A count of the sum of the complexities of all methods in a class. Consider a class K1 with methods M1, ..., Mn defined in the class, and let C1, ..., Cn be the complexities of the methods; WMC is then the sum of these complexities [CHID94].
Number of public methods (PM): Counts the number of public methods defined in a class.
Number of non-public methods (NPM): Counts the number of private methods defined in a class.
Lines Of Code (LOC): Counts the lines in the source code.
Table 10.4: Size Metrics

10.4 What should we measure during testing?

We should measure everything (if possible) which we want to control and which may help us to find answers to the questions given at the beginning of this chapter. Test metrics may help us to measure the current performance of any project. The collected data may become historical data for future projects. This data is very important because, in the absence of historical data, all estimates are just guesses. Hence, it is essential to record the key information about current projects. Test metrics may become an important indicator of the effectiveness and efficiency of a software testing process, and may also identify risky areas that need more testing.

10.4.1 Time

We may measure many things during testing with respect to time, and some of them are given as:

1) Time required to run a test case
2) Total time required to run a test suite
3) Time available for testing
4) Time interval between failures
5) Cumulative failures experienced up to a given time
6) Time of failure
7) Failures experienced in a time interval

A test case requires some time for its execution. A measurement of this time may help to estimate the total time required to execute a test suite. This is the simplest metric and may estimate the testing effort. We may calculate the time available for testing at any point during testing, if we know the total allotted time for testing. Generally, the unit of time is seconds, minutes or hours per test case; total testing time and the time needed to execute a planned test suite may be defined in terms of hours.

When we test a software product, we experience failures. These failures may be recorded in different ways, such as time of failure, time interval between failures, cumulative failures experienced up to a given time, and failures experienced in a time interval. Consider Table 10.5 and Table 10.6, where a time-based failure specification and a failure-based failure specification are given.
Sr. No. of failure occurrence   Failure time measured in minutes   Failure interval in minutes
 1     12    12
 2     26    14
 3     35    09
 4     38    03
 5     50    12
 6     70    20
 7    106    36
 8    125    19
 9    155    30
10    200    45
Table 10.5: Time based failure specification

Time in minutes   Cumulative failures   Failures in interval of 20 minutes
 20    01    01
 40    04    03
 60    05    01
 80    06    01
100    06    00
120    07    01
140    08    01
160    09    01
180    09    00
200    10    01
Table 10.6: Failure based failure specification

These two tables give us an idea about the failure pattern and may help us to define the following:

1) Time taken to experience n failures
2) Number of failures in a particular time interval
3) Total number of failures experienced after a specified time
4) Maximum / minimum number of failures experienced in any regular time interval

10.4.2 Quality of source code

We may assess the quality of the delivered source code after a reasonable time following release, using a formula defined in terms of the following quantities:

WDB: Number of weighted defects found before release
WDA: Number of weighted defects found after release

The weight for each defect is defined on the basis of defect severity and removal cost. A severity is assigned to each defect by testers based on how important or serious the defect is. A lower value of this metric indicates less error detection or less serious error detection. We may also calculate the number of defects per executed test case; this may also be used as an indicator of source code quality as the source code progresses through the series of test activities [STEP03].

10.4.3 Source Code Coverage

We may like to execute every statement of a program at least once before its release to the customer. Hence, the percentage of source code coverage may be calculated as the proportion of source code statements executed during testing relative to the total number of statements. A higher value of this metric gives confidence about the effectiveness of a test suite. We should write additional test cases to cover the uncovered portions of the source code.

10.4.4 Test Case Defect Density

This metric may help us to know the efficiency and effectiveness of our test cases. It is defined in terms of:

Failed test case: a test case that, when executed, produced an undesired output.
Passed test case: a test case that, when executed, produced the desired output.

A higher value of this metric indicates that the test cases are effective and efficient, because they are able to detect a larger number of defects.

10.4.5 Review Efficiency

Review efficiency is a metric that gives insight into the quality of the review process carried out during verification. The higher the value of this metric, the better the review efficiency.
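The formula images for these four metrics did not survive in this copy, so the sketch below implements plausible readings of them based only on the surrounding definitions; the exact ratios and the sample numbers are my assumptions, not necessarily the author's equations.

```python
# Assumed readings of the Section 10.4.2-10.4.5 metrics (illustrative only).
def quality_of_source_code(wdb, wda):
    # weighted defects found before release relative to all weighted defects
    return wdb / (wdb + wda)

def statement_coverage_pct(statements_executed, total_statements):
    return 100.0 * statements_executed / total_statements

def test_case_defect_density_pct(failed_cases, executed_cases):
    return 100.0 * failed_cases / executed_cases

def review_efficiency_pct(defects_found_in_review, total_defects_found):
    return 100.0 * defects_found_in_review / total_defects_found

print(quality_of_source_code(wdb=90, wda=10),
      statement_coverage_pct(930, 1000),
      test_case_defect_density_pct(20, 150),
      review_efficiency_pct(45, 120))
```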
10.5 Software Quality Attributes Prediction Models

Software quality depends on many attributes, such as reliability, maintainability, fault proneness, testability and complexity. A number of models are available for the prediction of one or more such attributes of quality. These models are especially beneficial for large-scale systems, where testing experts need to focus their attention and resources on problem areas in the system under development.

10.5.1 Reliability Models

Many reliability models for software are available, where the emphasis is on failures rather than faults. We experience failures during the execution of any program. A fault in the program may lead to failure(s) depending upon the input(s) given to the program when executing it. Hence, the time of failure and the time between failures may help us to find the reliability of software. As we all know, software reliability is the probability of failure-free operation of software in a given time under specified conditions. Generally, we consider calendar time: we may like to know the probability that a given software product will not fail in one month's time or one week's time, and so on. However, most of the available models are based on execution time. The execution time is the time for which the computer actually executes the program. Reliability models based on execution time normally give better results than those based on calendar time. In many cases, we have a mapping table that converts execution time to calendar time for the purpose of reliability studies. In order to differentiate the two timings, execution time is represented by τ and calendar time by t.

Most of the reliability models are applicable at the system testing level. Whenever the software fails, we note the time of failure and also try to locate and correct the fault that caused the failure. During system testing, the software may not fail at regular intervals and may also not follow a particular pattern. The variation in time between successive failures may be described in terms of the following functions:

μ(τ): average number of failures up to time τ
λ(τ): average number of failures per unit time at time τ, known as the failure intensity function

It is expected that the reliability of a program increases due to fault detection and correction over time, and hence the failure intensity decreases accordingly.

(i) Basic Execution Time Model

This is one of the popular models of software reliability assessment and was developed by J.D. Musa [MUSA79] in 1979. As the name indicates, it is based on execution time (τ). The basic assumption is that failures may occur according to a non-homogeneous Poisson process (NHPP) during testing. Many examples may be given of real-world events where Poisson processes are used. A few examples are:

* Number of users using a website in a given period of time
* Number of persons requesting railway tickets in a given period of time
* Number of e-mails expected in a given period of time

The failures during testing represent a non-homogeneous process, and the failure intensity decreases as a function of time. J.D. Musa assumed that the decrease in failure intensity as a function of the number of failures observed is constant, and is given as:

λ(μ) = λ₀ (1 − μ / ν₀)

where
λ₀: initial failure intensity at the start of testing
ν₀: total number of failures experienced up to infinite time
μ: number of failures experienced up to a given point in time

Musa [MUSA79] has also given the relationship between failure intensity (λ) and the mean failures experienced (μ), shown graphically in Fig. 10.1. If we take the first derivative of the equation given above, we get the slope of the failure intensity:

dλ/dμ = −λ₀ / ν₀

The negative sign shows that there is a negative slope, indicating a decreasing trend in failure intensity. This model also assumes a uniform failure pattern, meaning an equal probability of failure due to the various faults. The relationship between execution time (τ) and mean failures experienced (μ), shown in Fig. 10.2, follows from solving dμ/dτ = λ(μ) and is given as:

μ(τ) = ν₀ [1 − exp(−λ₀ τ / ν₀)]

The failure intensity as a function of time, shown in Fig. 10.3, is:

λ(τ) = λ₀ exp(−λ₀ τ / ν₀)

This relationship is useful for calculating the present failure intensity at any given value of execution time. Two additional equations are given to calculate the additional failures required to be experienced to reach a failure intensity objective (λF) and the additional time required to reach that objective:

Δμ = (ν₀ / λ₀) (λP − λF)
Δτ = (ν₀ / λ₀) ln(λP / λF)

where
Δμ: expected number of additional failures to be experienced to reach the failure intensity objective
Δτ: additional time required to reach the failure intensity objective
λP: present failure intensity
λF: failure intensity objective

Δμ and Δτ are very useful metrics for knowing the additional time and additional failures required to achieve a failure intensity objective.
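As a quick check of these relations (my own sketch, not part of the chapter), the snippet below codes them directly and reproduces the numbers used in Example 10.1 that follows; the variable names are mine.

```python
# Basic execution time model relations, with the Example 10.1 parameters
# (lambda0 = 10 failures/hour, nu0 = 100 failures).
import math

lambda0, nu0 = 10.0, 100.0

def intensity_from_failures(mu):      # lambda(mu) = lambda0 * (1 - mu/nu0)
    return lambda0 * (1.0 - mu / nu0)

def failures_from_time(tau):          # mu(tau) = nu0 * (1 - exp(-lambda0*tau/nu0))
    return nu0 * (1.0 - math.exp(-lambda0 * tau / nu0))

def intensity_from_time(tau):         # lambda(tau) = lambda0 * exp(-lambda0*tau/nu0)
    return lambda0 * math.exp(-lambda0 * tau / nu0)

def extra_to_objective(lam_present, lam_objective):
    d_mu = (nu0 / lambda0) * (lam_present - lam_objective)
    d_tau = (nu0 / lambda0) * math.log(lam_present / lam_objective)
    return d_mu, d_tau

print(intensity_from_failures(50))                       # 5.0 failures/hour
print(failures_from_time(10), intensity_from_time(10))   # ~63.2 failures, ~3.68 failures/hour
print(extra_to_objective(5.0, 2.0))                       # (30.0 failures, ~9.16 hours)
```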
Example 10.1: A program will experience 100 failures in infinite time. It has now experienced 50 failures. The initial failure intensity is 10 failures/hour. Use the basic execution time model for the following:

(i) Find the present failure intensity.
(ii) Calculate the decrement of failure intensity per failure.
(iii) Determine the failures experienced and the failure intensity after 10 and 50 hours of execution.
(iv) Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.

Solution:

(a) Present failure intensity: λ = λ₀ (1 − μ/ν₀) = 10 (1 − 50/100) = 5 failures/hour.

(b) Decrement of failure intensity per failure: dλ/dμ = −λ₀/ν₀ = −10/100 = −0.1 per failure.

(c) Failures experienced and failure intensity:
(i) After 10 hours of execution: μ(10) = 100 [1 − exp(−10 × 10/100)] ≈ 63.2 failures; λ(10) = 10 exp(−1) ≈ 3.68 failures/hour.
(ii) After 50 hours of execution: μ(50) = 100 [1 − exp(−5)] ≈ 99.3 failures; λ(50) = 10 exp(−5) ≈ 0.07 failures/hour.

(d) With a failure intensity objective of 2 failures/hour: Δμ = (100/10)(5 − 2) = 30 failures and Δτ = (100/10) ln(5/2) ≈ 9.16 hours.

(ii) Logarithmic Poisson Execution Time Model

With a slight modification of the failure intensity function, Musa presented the logarithmic Poisson execution time model. The failure intensity function is given as:

λ(μ) = λ₀ exp(−θ μ)

where θ is the failure intensity decay parameter, which represents the relative change of failure intensity per failure experienced. The slope of the failure intensity is given as:

dλ/dμ = −θ λ₀ exp(−θ μ) = −θ λ

The expected number of failures for this model is always infinite at infinite time. The relation for mean failures experienced is given as:

μ(τ) = (1/θ) ln(λ₀ θ τ + 1)

The expression for failure intensity with respect to time is given as:

λ(τ) = λ₀ / (λ₀ θ τ + 1)

The relationships for the additional number of failures and the additional execution time are given as:

Δμ = (1/θ) ln(λP / λF)
Δτ = (1/θ) (1/λF − 1/λP)

When the execution time is large, the logarithmic Poisson model may give larger values of failure intensity than the basic model.

Example 10.2: The initial failure intensity of a program is 10 failures/hour. The program has experienced 50 failures. The failure intensity decay parameter is 0.01/failure. Use the logarithmic Poisson execution time model for the following:

(a) Find the present failure intensity.
(b) Calculate the decrement of failure intensity per failure.
(c) Determine the failures experienced and the failure intensity after 10 and 50 hours of execution.
(d) Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.

Solution:

(a) Present failure intensity, with μ = 50 failures, λ₀ = 10 failures/hour and θ = 0.01/failure: λ = 10 exp(−0.01 × 50) = 6.06 failures/hour.

(b) Decrement of failure intensity per failure: dλ/dμ = −θ λ = −0.01 × 6.06 ≈ −0.06 per failure.

(c) Failures experienced and failure intensity:
(i) After 10 hours of execution: μ(10) = (1/0.01) ln(10 × 0.01 × 10 + 1) ≈ 69.3 failures; λ(10) = 10/(10 × 0.01 × 10 + 1) = 5 failures/hour.
(ii) After 50 hours of execution: μ(50) = (1/0.01) ln(6) ≈ 179.2 failures; λ(50) = 10/6 ≈ 1.67 failures/hour.

(d) With a failure intensity objective of 2 failures/hour: Δμ = (1/0.01) ln(6.06/2) ≈ 110.8 failures and Δτ = (1/0.01)(1/2 − 1/6.06) ≈ 33.5 hours.
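The same kind of quick check can be coded for the logarithmic Poisson model (again my own sketch, using the Example 10.2 parameters).

```python
# Logarithmic Poisson execution time model relations, with the Example 10.2
# parameters (lambda0 = 10 failures/hour, theta = 0.01 per failure).
import math

lambda0, theta = 10.0, 0.01

def intensity_from_failures(mu):      # lambda(mu) = lambda0 * exp(-theta*mu)
    return lambda0 * math.exp(-theta * mu)

def failures_from_time(tau):          # mu(tau) = (1/theta) * ln(lambda0*theta*tau + 1)
    return math.log(lambda0 * theta * tau + 1.0) / theta

def intensity_from_time(tau):         # lambda(tau) = lambda0 / (lambda0*theta*tau + 1)
    return lambda0 / (lambda0 * theta * tau + 1.0)

def extra_to_objective(lam_present, lam_objective):
    d_mu = math.log(lam_present / lam_objective) / theta
    d_tau = (1.0 / lam_objective - 1.0 / lam_present) / theta
    return d_mu, d_tau

print(intensity_from_failures(50))                                 # ~6.06 failures/hour
print(round(failures_from_time(10), 1), intensity_from_time(10))   # ~69.3 failures, 5.0 failures/hour
print(extra_to_objective(6.06, 2.0))                                # ~(110.9 failures, ~33.5 hours)
```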
(iii) The Jelinski-Moranda Model

The Jelinski-Moranda model [JELI72] is the earliest and simplest software reliability model. It proposed a failure intensity function of the form

λ(ti) = Φ [N − (i − 1)]

where
Φ: constant of proportionality
N: total number of errors present
i: number of errors found by time interval ti

This model assumes that all failures have the same failure rate. It means that the failure rate is a step function, and there will be an improvement in reliability after fixing an error. Hence, every failure contributes equally to the overall reliability. Here, failure intensity is directly proportional to the number of errors remaining in the software. Once we know the value of the failure intensity function using any reliability model, we may calculate reliability using the equation given below:

R(t) = exp(−λ t)

where λ is the failure intensity and t is the operating time. The lower the failure intensity, the higher the reliability, and vice versa.

Example 10.3: A program may experience 200 failures in infinite time of testing. It has experienced 100 failures. Use the Jelinski-Moranda model to calculate the failure intensity after the experience of 150 failures.

Solution: Total expected number of failures N = 200, failures experienced i = 100, constant of proportionality Φ = 0.02. We know

λ = Φ [N − (i − 1)] = 0.02 (200 − 100 + 1) = 2.02 failures/hour

After 150 failures:

λ = 0.02 (200 − 150 + 1) = 1.02 failures/hour

The failure intensity will decrease with every additional failure experienced.
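A tiny sketch (mine, not from the chapter) of these two relations, reproducing the Example 10.3 numbers:

```python
# Jelinski-Moranda failure intensity and the reliability relation
# R(t) = exp(-lambda * t), as written above.
import math

def jm_failure_intensity(phi, n_total, i):
    """Failure intensity in the interval ending with the i-th failure:
    lambda = phi * (N - (i - 1))."""
    return phi * (n_total - (i - 1))

def reliability(lam, t):
    """Probability of failure-free operation for time t at intensity lam."""
    return math.exp(-lam * t)

print(jm_failure_intensity(0.02, 200, 100))   # after 100 failures -> 2.02 failures/hour
print(jm_failure_intensity(0.02, 200, 150))   # after 150 failures -> 1.02 failures/hour
print(round(reliability(1.02, 1.0), 3))       # reliability over one hour of operation
```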
10.5.2 An example of a fault prediction model in practice

It is clear that software metrics can be used to capture the quality of object oriented design and code. These metrics provide ways to evaluate the quality of software, and their use in earlier phases of software development can help organizations to assess a large software development quickly, at a low cost. To obtain help for planning and executing testing by focusing resources on the fault-prone parts of the design and code, a model that predicts faulty classes should be used. The fault prediction model can also be used to identify classes that are prone to have severe faults. One can use this model with respect to high-severity faults to focus the testing on those parts of the system that are likely to cause serious failures. In this section, we describe models used to find the relationship between object oriented metrics and fault proneness, and how such models can be of great help in planning and executing testing activities [MALH09, SING10].

In order to perform the analysis, we used the public domain KC1 NASA data set [NASA04]. The data set is available at www.mdp.ivv.nasa.gov. The 145 classes in this data set were developed using the C++ language. The goal of our analysis is to explore empirically the relationship between object oriented metrics and fault proneness at the class level. Therefore, fault proneness is the binary dependent variable, and the object oriented metrics (namely WMC, CBO, RFC, LCOM, DIT, NOC and SLOC) are the independent variables. Fault proneness is defined as the probability of fault detection in a class. We first associated defects with each class according to their severities. The value of severity quantifies the impact of the defect on the overall environment, with 1 being most severe and 5 being least severe. Faults with severity rating 1 were classified as high severity faults, faults with severity rating 2 as medium severity faults, and faults with severity ratings 3, 4 and 5 as low severity faults, since at severity rating 4 no class was found to be faulty and at severity rating 5 only one class was faulty. Table 10.7 summarizes the distribution of faults and faulty classes at high, medium and low severity levels in the KC1 NASA data set after preprocessing of the faults in the data set.

Level of severity   Number of faulty classes   % of faulty classes   Number of faults   % distribution of faults
High       23    15.56     48    7.47
Medium     58    40.00    449   69.93
Low        39    26.90    145   22.59
Table 10.7: Distribution of Faults and Faulty Classes at High, Medium and Low Severity Levels

The min, max, mean, median, standard deviation, 25% quartile and 75% quartile for all metrics used in the analysis are shown in Table 10.8.

Metric   Min.   Max.    Mean     Median   Std. Dev.   Percentile (25%)   Percentile (75%)
CBO       0       24      8.32      8        6.38        3        14
LCOM      0      100     68.72     84       36.89       56.5      96
NOC       0        5      0.21      0        0.7         0         0
RFC       0      222     34.38     28       36.2        10        44.5
WMC       0      100     17.42     12       17.45        8        22
LOC       0     2313    211.25    108      345.55        8       235.5
DIT       0        6      1         1        1.26        0         1.5
Table 10.8: Descriptive statistics for metrics

The low values of DIT and NOC indicate that inheritance is not much used in the system, while the LCOM metric has high values. Table 10.9 shows the correlations among the metrics, which is an important static quantity.

Metric   CBO      LCOM     NOC      RFC      WMC      LOC      DIT
CBO      1
LCOM     0.256    1
NOC     -0.03    -0.028    1
RFC      0.386    0.334   -0.049    1
WMC      0.245    0.318    0.035    0.628    1
LOC      0.572    0.238   -0.039    0.508    0.624    1
DIT      0.4692   0.256   -0.031    0.654    0.136    0.345    1
Table 10.9: Correlations among metrics

The correlation coefficients shown in bold are significant at the 0.01 level. The WMC, LOC and DIT metrics are correlated with the RFC metric. Similarly, the WMC and CBO metrics are correlated with the LOC metric. Therefore, these metrics are not totally independent and represent redundant information.

The next step of our analysis found the combined effect of object oriented metrics on the fault proneness of a class at various severity levels. We obtained four multivariate fault prediction models using the LR method: the first for high severity faults, the second for medium severity faults, the third for low severity faults and the fourth for ungraded severity faults. We used the multivariate logistic regression approach in our analysis. In a multivariate logistic regression model, the coefficient and the significance level of an independent variable represent the net effect of that variable on the dependent variable, in our case fault proneness. Tables 10.10, 10.11, 10.12 and 10.13 provide the coefficient (B), standard error (SE), statistical significance (sig) and odds ratio (exp(B)) for the metrics included in each model. Two metrics, CBO and SLOC, were included in the multivariate model for predicting high severity faults. CBO, LCOM, NOC and SLOC were included in the multivariate model for predicting medium severity faults. Four metrics, CBO, WMC, RFC and SLOC, were included in the model predicted with respect to low severity faults. Similarly, CBO, LCOM, NOC, RFC and SLOC were included in the ungraded severity model.

Metric      B         S.E.      Sig.      Exp(B)
CBO         0.102     0.033     0.002     1.107
SLOC        0.001     0.001     0.007     1.001
Constant   -2.541     0.402     0.000     0.079
Table 10.10: High severity faults model statistics

Metric      B         S.E.      Sig.      Exp(B)
CBO         0.190     0.038     0.0001    1.209
LCOM       -0.011     0.004     0.009     0.989
NOC        -1.070     0.320     0.001     0.343
SLOC        0.004     0.002     0.006     1.004
Constant   -0.307     0.340     0.367     0.736
Table 10.11: Medium severity faults model statistics

Metric      B         S.E.      Sig.      Exp(B)
CBO         0.167     0.041     0.001     1.137
RFC        -0.034     0.010     0.001     0.971
WMC         0.047     0.018     0.028     1.039
SLOC        0.003     0.001     0.001     1.003
Constant   -1.447     0.371     0.005     0.354
Table 10.12: Low severity faults model statistics

Metric      B         S.E.      Sig.      Exp(B)
CBO         0.195     0.040     0.0001    1.216
LCOM       -0.010     0.004     0.007     0.990
NOC        -0.749     0.199     0.0001    0.473
RFC        -0.016     0.006     0.006     0.984
SLOC        0.007     0.002     0.0001    1.007
Constant    0.134     0.326     0.680     1.144
Table 10.13: Ungraded severity faults model statistics
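As an illustration of how such a fitted model is applied (my own sketch, not from the study), the snippet below plugs the Table 10.10 coefficients into the logistic transformation to obtain a predicted fault-proneness probability; the metric values for the example class are invented.

```python
# Predicted fault-proneness probability from the high-severity model of
# Table 10.10 (CBO and SLOC); the example class below is hypothetical.
import math

def fault_proneness_high_severity(cbo, sloc):
    # logit = B0 + B_CBO * CBO + B_SLOC * SLOC, with coefficients from Table 10.10
    logit = -2.541 + 0.102 * cbo + 0.001 * sloc
    return 1.0 / (1.0 + math.exp(-logit))    # logistic transformation

# A hypothetical class with CBO = 14 and 500 source lines of code:
print(round(fault_proneness_high_severity(14, 500), 3))
```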
Metric     B        S.E.    Sig.     Exp(B)
CBO        0.195    0.040   0.0001   1.216
LCOM       -0.010   0.004   0.007    0.990
NOC        -0.749   0.199   0.0001   0.473
RFC        -0.016   0.006   0.006    0.984
SLOC       0.007    0.002   0.0001   1.007
Constant   0.134    0.326   0.680    1.144
Table 10.13: Ungraded severity fault model statistics

To validate our findings we performed 10-cross validation of all the models. For the 10-cross validation, the classes were randomly divided into parts of approximately equal size (14 partitions of ten data points each and one partition of five data points). The performance of binary prediction models is typically evaluated using a confusion matrix (see table 10.14). In order to validate the findings of our analysis, we used the commonly used evaluation measures: sensitivity, specificity, completeness, precision and ROC analysis.

Observed                 Predicted 1.00 (Fault-Prone)   Predicted 0.00 (Not Fault-Prone)
1.00 (Fault-Prone)       True Fault Prone (TFP)         False Not Fault Prone (FNFP)
0.00 (Not Fault-Prone)   False Fault Prone (FFP)        True Not Fault Prone (TNFP)
Table 10.14: Confusion matrix

Precision: the ratio of the number of classes correctly predicted to the total number of classes.
Sensitivity: the ratio of the number of classes correctly predicted as fault prone to the total number of classes that are actually fault prone.
Specificity: the ratio of the number of classes correctly predicted as not fault prone to the total number of classes that are actually not fault prone.
Completeness: the number of faults in classes classified as fault prone, divided by the total number of faults in the system.

Receiver Operating Characteristic (ROC) Curve: The performance of the outputs of the predicted models was evaluated using ROC analysis. The ROC curve, which is defined as a plot of sensitivity on the y-coordinate versus 1-specificity on the x-coordinate, is an effective method of evaluating the quality or performance of predicted models [EMAM99]. While constructing ROC curves, we selected many cutoff points between 0 and 1 and calculated the sensitivity and specificity at each cutoff point. The optimal cutoff point (the one that maximizes both sensitivity and specificity) can be selected from the ROC curve [EMAM99]. Hence, by using the ROC curve one can easily determine the optimal cutoff point for a model. The area under the ROC curve (AUC) is a combined measure of sensitivity and specificity, and we use it to compute the accuracy of the predicted models. The results of the cross validation of the models predicted via the LR approach are summarized in table 10.15.

Model       Cutoff   Sensitivity   Specificity   Precision   Completeness   AUC     SE
Model I     0.25     64.60         66.40         66.21       59.81          0.686   0.044
Model II    0.77     70.70         66.70         68.29       74.14          0.754   0.041
Model III   0.49     61.50         64.20         63.48       71.96          0.664   0.053
Model IV    0.37     69.50         68.60         68.96       79.59          0.753   0.039
Table 10.15: Results of 10-cross validation of models

The ROC curves for the LR models with respect to the high, medium, low and ungraded severity of faults are shown in figure 10.4. Based on the findings from this analysis, one can use the SLOC and CBO metrics in the earlier phases of software development to measure the quality of the system and predict which classes need extra attention because they are likely to contain higher severity faults. This can help management focus resources on those classes that are likely to cause serious failures. Also, if required, developers can reconsider the design and take corrective actions. The models predicted in this section can be of great help for planning and executing testing activities. For example, if one has the resources available to inspect 26 percent of the code, one should test the 26 percent of classes predicted to have the more severe faults.
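To make the use of such a model concrete, here is a small sketch (our own illustration, not the authors' code) that applies the high severity coefficients from table 10.10 to a hypothetical class and then computes sensitivity, specificity and precision from binary outcomes at a chosen cutoff; precision here follows the definition used in the text, i.e. the fraction of all classes that are predicted correctly.

```python
import math

def high_severity_fault_proneness(cbo, sloc):
    # Predicted probability of a high severity fault using the logistic
    # regression coefficients reported in table 10.10:
    # logit = -2.541 + 0.102*CBO + 0.001*SLOC
    logit = -2.541 + 0.102 * cbo + 0.001 * sloc
    return 1.0 / (1.0 + math.exp(-logit))

def confusion_measures(actual, predicted_prob, cutoff):
    # Classify each class at the cutoff, count the four cells of the
    # confusion matrix (table 10.14), then derive the evaluation measures.
    tfp = fnfp = ffp = tnfp = 0
    for a, p in zip(actual, predicted_prob):
        predicted = 1 if p >= cutoff else 0
        if a == 1 and predicted == 1:
            tfp += 1
        elif a == 1 and predicted == 0:
            fnfp += 1
        elif a == 0 and predicted == 1:
            ffp += 1
        else:
            tnfp += 1
    sensitivity = tfp / (tfp + fnfp)
    specificity = tnfp / (tnfp + ffp)
    precision = (tfp + tnfp) / (tfp + fnfp + ffp + tnfp)  # correctly predicted / all classes
    return sensitivity, specificity, precision

# Hypothetical class with CBO = 14 and SLOC = 235
print(round(high_severity_fault_proneness(14, 235), 3))

# Hypothetical labels and predictions evaluated at the Model I cutoff of 0.25
actual = [1, 0, 1, 0, 0, 1]
probs = [0.40, 0.10, 0.20, 0.30, 0.05, 0.60]
print(confusion_measures(actual, probs, 0.25))
```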
If these classes are selected for testing, one can expect most of the severe faults to be covered. There are many schools of thought about the usefulness and applications of software metrics. However, every school of thought accepts the old software engineering quote: "You cannot improve what you cannot measure, and you cannot control what you cannot measure." In order to control and improve various activities, we should have something with which to measure those activities, and what that something is differs from one school of thought to another. Despite the different views, most of us feel that software metrics help to improve productivity and quality. Software process metrics frameworks such as the capability maturity model for software (CMM-SW) and ISO 9001 are widely used, and many organizations are putting serious effort into implementing them.

10.5.3 Maintenance effort prediction model

The cost of software maintenance is increasing day by day. Development may take 2 to 3 years, but the same software may have to be maintained for another 10 or more years. Hence, maintenance effort is becoming an important factor for software developers. The obvious question is how we should estimate maintenance effort in the early phases of the software development life cycle. The estimate helps us calculate the cost of software maintenance, which a customer may like to know as early as possible in order to plan the costing of the project. Maintenance effort is defined as the number of lines of source code added or changed during the maintenance phase. A model has been used to predict maintenance effort using an Artificial Neural Network (ANN) [AGGA06, MALH09]. This is a simple model and its predictions are quite realistic. In this model, maintenance effort is the dependent variable. The independent variables are eight object oriented metrics, namely WMC, CBO, RFC, LCOM, DIT, NOC, DAC and NOM. The model is trained and tested on two commercial software products, the User Interface Management System (UIMS) and the Quality Evaluation System (QUES), which are presented in [LI93]. The UIMS system consists of 39 classes and the QUES system consists of 71 classes.

The ANN used in the model prediction belongs to the class of multilayer feed forward networks and is referred to as an M-H-Q network, with M source nodes, H nodes in the hidden layer and Q nodes in the output layer. The input nodes are connected to every node of the hidden layer but are not directly connected to the output node; thus, the network does not have any lateral or shortcut connections. The ANN repetitively adjusts the weights so that the difference between the desired output and the actual output of the network is minimized. The network learns by finding a vector of connection weights that minimizes the sum of squared errors on the training data set. The summary of the ANN used in the model for predicting maintenance effort is shown in table 10.16.

Architecture
  Layers: 3
  Input units: 8
  Hidden units: 9
  Output units: 1
Training
  Transfer function: Tansig
  Algorithm: Back propagation
  Training function: TrainBR
Table 10.16: ANN Summary

The ANN was trained by the standard error back propagation algorithm at a learning rate of 0.005, with the minimum squared error as the training stopping criterion. The main measure used for evaluating model performance is the Mean Absolute Relative Error (MARE).
MARE is the preferred error measure for software measurement researchers and is calculated as follows [FINN96]:

MARE = (1/n) * Σ |actual_i - estimate_i| / actual_i

where estimate is the output predicted by the network for each observation and n is the number of observations. To establish whether models are biased and tend to over- or under-estimate, the Mean Relative Error (MRE) is calculated as follows [FINN96]:

MRE = (1/n) * Σ (actual_i - estimate_i) / actual_i

We use the following steps in model prediction:
1. Normalize the input metrics using min-max normalization. Min-max normalization performs a linear transformation on the original data: if minA and maxA are the minimum and maximum values of an attribute A, it maps a value v of A to v' in the range 0 to 1 using v' = (v - minA) / (maxA - minA).
2. Perform principal component (P.C.) analysis on the normalized metrics to produce domain metrics.
3. Divide the data into training, test and validation sets using a 3:1:1 ratio.
4. Develop the ANN model based on the training and test data sets.
5. Apply the ANN model to the validation data set in order to evaluate the accuracy of the model.

The P.C. extraction analysis and the varimax rotation method were applied to all metrics. The rotated component matrix is given in table 10.17, which shows the relationship between the original object oriented metrics and the domain metrics. The values above 0.7 (shown in bold in table 10.17) are the metrics used to interpret the PCs. For each PC, we also provide its eigenvalue, variance percent and cumulative percent. The interpretations of the PCs are as follows:
* P1: DAC, LCOM, NOM, RFC and WMC are cohesion, coupling and size metrics, so we have size, coupling and cohesion metrics in this dimension. This shows that there are classes with many internal methods (methods defined in the class) and external methods (methods called by the class), and that cohesion and coupling are related to the number of methods and attributes in the class.
* P2: MPC is a coupling metric that counts the number of send statements defined in a class.
* P3: NOC and DIT are inheritance metrics that count the number of children and the depth of the inheritance tree of a class.

P.C.           P1       P2       P3
Eigenvalue     3.74     1.41     1.14
Variance %     46.76    17.64    14.30
Cumulative %   46.76    64.40    78.71
DAC            0.796    0.016    0.065
DIT            -0.016   -0.220   -0.85
LCOM           0.820    -0.057   -0.079
MPC            0.094    0.937    0.017
NOC            0.093    -0.445   0.714
NOM            0.967    -0.017   0.049
RFC            0.815    0.509    -0.003
WMC            0.802    0.206    0.184
Table 10.17: Rotated principal components

We employed the ANN technique to predict the maintenance effort of the classes. The inputs to the network were the domain metrics P1, P2 and P3, and the network was trained using the back propagation algorithm. Table 10.16 shows the best architecture, which was determined experimentally. The model is trained on the training and test data sets and evaluated on the validation data set. Table 10.18 shows the MARE, MRE, r and p-value results of the ANN model evaluated on the validation data. The correlation between the predicted change and the observed change is represented by the coefficient of correlation (r), and the significance level of a validation is indicated by the p-value; a commonly accepted p-value is 0.05.

MARE      0.265
MRE       0.09
r         0.582
p-value   0.004
Table 10.18: Validation results of ANN model

ARE range   Percent
0-10%       50
11-27%      9.09
28-43%      18.18
44% +       22.72
Table 10.19: Analysis of model evaluation accuracy

For the validation data set, table 10.19 shows the percentage of observations whose absolute relative error falls in each range. We conclude that the prediction is valid in the population.
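The normalization and error measures above are straightforward to compute. The sketch below is our own illustration under the formulas as reconstructed here (the sign convention used for MRE is an assumption), applied to hypothetical effort values:

```python
def min_max_normalize(values):
    # Min-max normalization: map each value v of an attribute with minimum
    # min_a and maximum max_a to (v - min_a) / (max_a - min_a), i.e. into [0, 1].
    min_a, max_a = min(values), max(values)
    return [(v - min_a) / (max_a - min_a) for v in values]

def mare(actual, estimate):
    # Mean Absolute Relative Error over n observations:
    # (1/n) * sum(|actual_i - estimate_i| / actual_i)
    n = len(actual)
    return sum(abs(a - e) / a for a, e in zip(actual, estimate)) / n

def mre(actual, estimate):
    # Mean Relative Error: the signed counterpart of MARE, used to check
    # whether the model tends to over- or under-estimate.
    n = len(actual)
    return sum((a - e) / a for a, e in zip(actual, estimate)) / n

# Hypothetical maintenance effort (lines of code added or changed)
observed = [120, 45, 300, 80]
predicted = [100, 50, 280, 95]

print(min_max_normalize(observed))
print(round(mare(observed, predicted), 3), round(mre(observed, predicted), 3))
```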
Figure 10.5 plots the predicted number of lines added or changed versus the actual number of lines added or changed. Software testing metrics are one part of metrics studies and focus on the testing issues of processes and products. Test suite effectiveness, source code coverage, defect density and review efficiency are some of the popular testing metrics. Testing efficiency may also be calculated as the size of the software tested divided by the resources used. We should also have metrics that provide immediate, real-time feedback to testers and project managers on the quality of testing during each test phase, rather than waiting until the release of the software.

MULTIPLE CHOICE QUESTIONS
Note: Select the most appropriate answer for each of the following questions.
10.1 One fault may lead to: (a) One failure (b) Two failures (c) Many failures (d) All of the above
10.2 Failure occurrences can be represented as: (a) Time to failure (b) Time interval between failures (c) Failures experienced in a time interval (d) All of the above
10.3 What is the maximum value of reliability? (a) 0 (b) 1 (c) 100 (d) None of the above
10.4 What is the minimum value of reliability? (a) 0 (b) 1 (c) 100 (d) None of the above
10.5 As the failure intensity decreases, reliability: (a) Increases (b) Decreases (c) No effect (d) None of the above
10.6 Basic and logarithmic execution time models were developed by: (a) Victor Basili (b) J.D. Musa (c) R. Binder (d) B. Littlewood
10.7 Which is not a cohesion metric? (a) Lack of cohesion in methods (b) Tight class cohesion (c) Response for a class (d) Information flow cohesion
10.8 Which is not a size metric? (a) Number of attributes per class (b) Number of methods per class (c) Number of children (d) Weighted methods per class
10.9 Choose an inheritance metric: (a) Number of children (b) Response for a class (c) Number of methods per class (d) Message passing coupling
10.10 Which is not a coupling metric? (a) Coupling between objects (b) Data abstraction coupling (c) Message passing coupling (d) Number of children
10.11 What can be measured with respect to time during testing? (a) Time available for testing (b) Time to failure (c) Time interval between failures (d) All of the above
10.12 NHPP stands for: (a) Non-homogeneous Poisson process (b) Non-heterogeneous Poisson process (c) Non-homogeneous programming process (d) Non-heterogeneous programming process
10.13 Which is not a test process metric? (a) Number of test cases designed (b) Number of test cases executed (c) Number of failures experienced in a time interval (d) Number of test cases failed
10.14 Which is not a test product metric? (a) Time interval between failures (b) Time to failure (c) Estimated time for testing (d) Test case execution time
10.15 Testability is dependent on: (a) Characteristics of the representation (b) Characteristics of the implementation (c) Built-in test capabilities (d) All of the above
10.16 Which of the following is true?
(a) Testability is inversely proportional to complexity (b) Testability is directly proportional to complexity (c) Testability is equal to complexity (d) None of the above
10.17 Cyclomatic complexity of code provides: (a) An upper limit for the number of test cases needed for the code coverage criterion (b) A lower limit for the number of test cases needed for the code coverage criterion (c) A direction for testing (d) None of the above
10.18 The higher the cyclomatic complexity: (a) More is the testing effort (b) Less is the testing effort (c) Infinite is the testing effort (d) None of the above
10.19 Which is not an object oriented metric given by Chidamber and Kemerer? (a) Coupling between objects (b) Lack of cohesion (c) Response for a class (d) Number of branches in a tree
10.20 Precision is defined as: (a) Ratio of the number of classes correctly predicted to the total number of classes (b) Ratio of the number of classes wrongly predicted to the total number of classes (c) Ratio of the total number of classes to the classes wrongly predicted (d) None of the above
10.21 Sensitivity is defined as: (a) Ratio of the number of classes correctly predicted as fault prone to the total number of classes (b) Ratio of the number of classes correctly predicted as fault prone to the total number of classes that are actually fault prone (c) Ratio of faulty classes to the total number of classes (d) None of the above
10.22 Reliability is measured with respect to: (a) Effort (b) Time (c) Faults (d) Failures
10.23 Choose an event where the Poisson process is not used: (a) Number of users using a website in a given time interval (b) Number of persons requesting railway tickets in a given period of time (c) Number of students in a class (d) Number of e-mails expected in a given period of time
10.24 Choose a data structure metric: (a) Number of live variables (b) Variable span (c) Module weakness (d) All of the above
10.25 Software testing metrics are used to measure: (a) Progress of testing (b) Reliability of software (c) Time spent during testing (d) All of the above

Exercises
10.1 What is a software metric? Why do we really need metrics in software? Discuss the areas of application and the problems faced during the implementation of metrics.
10.2 Define the following terms: (a) Measure (b) Measurement (c) Metrics
10.3 (a) What should we measure during testing? (b) Discuss things which can be measured with respect to time. (c) Explain any reliability model where the emphasis is on failures rather than faults.
10.4 Describe the following metrics: (a) Quality of source code (b) Source code coverage (c) Test case defect density (d) Review efficiency
10.5 (a) What metrics are required to be captured during testing? (b) Identify some test process metrics. (c) Identify some test product metrics.
10.6 What is the relationship between testability and complexity? Discuss the factors affecting software testability.
10.7 Explain the software fault prediction model. List the metrics used in the analysis of the model. Define precision, sensitivity and completeness. What is the purpose of using the receiver operating characteristic (ROC) curve?
10.8 Discuss the basic model of software reliability. How can we calculate the expected number of failures μ(τ) and the failure intensity λ(τ)?
10.9 Explain the logarithmic Poisson model and find the values of μ(τ) and λ(τ).
10.10 Assume that the initial failure intensity is 20 failures/CPU hr. The failure intensity decay parameter is 0.05/failure. We assume that 50 failures have been experienced. Calculate the current failure intensity.
10.11 Assume that the initial failure intensity is 5 failures/CPU hr. The failure intensity decay parameter is 0.01/failure. We have experienced 25 failures up to this time. Find the failures experienced and the failure intensity after 30 and 60 CPU hrs of execution.
10.12 Explain the Jelinski-Moranda model of reliability theory. What is the relationship between λ(t) and t?
10.13 A program is expected to have 500 faults. The assumption is that one fault may lead to one failure. The initial failure intensity is 10 failures/CPU hr. The program is released with a failure intensity objective of 6 failures/CPU hr. Calculate the number of failures experienced before release.
10.14 Assume that a program will experience 200 failures in infinite time. It has now experienced 100 failures. The initial failure intensity was 10 failures/CPU hr. (a) Determine the present failure intensity. (b) Calculate the failures experienced and the failure intensity after 50 and 100 CPU hrs of execution. Use the basic execution time model for the above calculations.
10.15 What is software reliability? Does it exist? Describe the following terms: (i) MTBF (ii) MTTF (iii) Failure intensity (iv) Failures experienced in a time interval
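For readers working through exercises 10.10-10.14, the following sketch collects the formulas of Musa's basic execution time and logarithmic Poisson execution time models as they are commonly stated in the reliability literature; it is our own illustration (the notation lambda0, v0, theta, mu and tau is ours, and the worked values are taken from the exercise statements), not a solution key from the text.

```python
import math

# Basic execution time model: lambda0 is the initial failure intensity,
# v0 the total expected failures in infinite time, tau the execution time,
# mu the number of failures experienced so far.

def basic_mu(lambda0, v0, tau):
    # Expected failures after tau units of execution:
    # mu(tau) = v0 * (1 - exp(-lambda0 * tau / v0))
    return v0 * (1 - math.exp(-lambda0 * tau / v0))

def basic_lambda_tau(lambda0, v0, tau):
    # Failure intensity after tau units of execution:
    # lambda(tau) = lambda0 * exp(-lambda0 * tau / v0)
    return lambda0 * math.exp(-lambda0 * tau / v0)

def basic_lambda_mu(lambda0, v0, mu):
    # Failure intensity after mu failures: lambda(mu) = lambda0 * (1 - mu / v0)
    return lambda0 * (1 - mu / v0)

# Logarithmic Poisson execution time model: theta is the failure intensity
# decay parameter.

def log_poisson_lambda_mu(lambda0, theta, mu):
    # Failure intensity after mu failures: lambda(mu) = lambda0 * exp(-theta * mu)
    return lambda0 * math.exp(-theta * mu)

# Exercise 10.10 (logarithmic Poisson): lambda0 = 20, theta = 0.05, mu = 50
print(round(log_poisson_lambda_mu(20, 0.05, 50), 2))  # about 1.64 failures/CPU hr

# Exercise 10.14 (basic model): v0 = 200, lambda0 = 10, mu = 100
print(basic_lambda_mu(10, 200, 100))                  # present failure intensity
print(round(basic_mu(10, 200, 50), 1))                # failures after 50 CPU hr
```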

Thursday, May 14, 2020

What Are the Properties of the Alkaline Earth Metals?

The alkaline earth metals are one group of elements on the periodic table. The elements highlighted in yellow on the periodic table in the graphic belong to the alkaline earth element group. Here is a look at the location and the properties of these elements.

Location of the Alkaline Earths on the Periodic Table

The alkaline earths are the elements located in Group IIA of the periodic table, which is the second column of the table. The list of elements that are alkaline earth metals is short. In order of increasing atomic number, the six element names and symbols are: beryllium (Be), magnesium (Mg), calcium (Ca), strontium (Sr), barium (Ba), and radium (Ra). If element 120 is produced, it will most likely be a new alkaline earth metal. Presently, radium is the only one of these elements that is radioactive with no stable isotopes; element 120 would be radioactive, too. All of the alkaline earths except magnesium and strontium have at least one radioisotope that occurs naturally.

Properties of the Alkaline Earth Metals

The alkaline earths possess many of the characteristic properties of metals. Alkaline earths have low electron affinities and low electronegativities. As with the alkali metals, the properties depend on the ease with which electrons are lost. The alkaline earths have two electrons in the outer shell, and they have smaller atomic radii than the alkali metals. The two valence electrons are not tightly bound to the nucleus, so the alkaline earths readily lose the electrons to form divalent cations.

Summary of Common Alkaline Earth Properties

Two electrons in the outer shell and a full outer electron s shell.
Low electron affinities.
Low electronegativities.
Relatively low densities.
Relatively low melting points and boiling points, as far as metals are concerned.
Typically malleable and ductile; relatively soft and strong.
The elements readily form divalent cations (such as Mg2+ and Ca2+).
The alkaline earth metals are very reactive, although less so than the alkali metals. Because of their high reactivity, the alkaline earths are not found free in nature. However, all of these elements do occur naturally. They are common in a wide variety of compounds and minerals.
These elements are shiny and silver-white as pure metals, although they usually appear dull because they react with air to form surface oxide layers.
All the alkaline earths, except for beryllium, form corrosive alkaline hydroxides.
All of the alkaline earths react with halogens to form halides. The halides are ionic crystals, except for beryllium chloride, which is a covalent compound.

Fun Fact

The alkaline earths get their names from their oxides, which were known to humankind long before the pure elements were isolated. These oxides were called beryllia, magnesia, lime, strontia, and baryta. The word earth in this use comes from an old term used by chemists to describe a nonmetallic substance that did not dissolve in water and resisted heating. It wasn't until 1780 that Antoine Lavoisier suggested the earths were compounds rather than elements.

Wednesday, May 6, 2020

The Effects Of Sleep Deprivation - 710 Words

Sleep Deprivation. How does lack of sleep affect you? I bet everyone has stayed awake for 24 hours at least once in their life. Why has sleep deprivation become such a big issue? Firstly, the global human problem is that teenagers, especially high school or college students, spend their nights at parties, in front of computers playing games, or some of them even doing their homework. Likewise, a lot of workers or workaholics spend their nighttime doing tasks, willing to skip sleep in order to impress their bosses. The second thing is the modern generation: these days people want to receive food, entertainment, and information 24/7, and because of this, consumers have to stay awake much longer and sometimes don't even have time to sleep. Not always it... Some symptoms of insomnia are easy to recognize, including mood changes, poor attention, and lack of motivation or energy during the day. Insomnia is not attributable to a psychiatric, medical or environmental cause. There are three types of primary insomnia: "psychophysiological, idiopathic insomnia, and sleep state misperception (paradoxical insomnia)" (Wikipedia). The main symptom of psychophysiological insomnia is paranoia. As regards idiopathic insomnia, this type of disease begins in childhood and lasts for the rest of a person's life. It is a problem in the part of the brain responsible for the sleep-wake cycle, resulting in either underactive sleep signals or overactive wake signals. Sleep deprivation contributes to car accidents, too. "The U.S. Institute of Medicine reports that almost 20 percent of car accidents happen because drivers are sleepy" (Arianna Huffington). Moreover, lack of sleep is a serious problem in the medical profession. According to the research, most medical residents work for exceedingly long periods; some of them work 30 hours twice a week. Twenty percent of all residents make fatigue-related mistakes that harm patients' lives, and only five percent made mistakes that resulted in a patient's death. To conclude, my standpoint is that lack of sleep has no positive effects on the human body. Studies show that 20 percent of adults...

Tuesday, May 5, 2020

Expressive Arts Activity Essay Example For Students

Expressive Arts Activity Essay

Modality: Art - comic strip/drawing/collage

The Fit: Discover the client's feelings based on a special event in their life. This activity can be implemented with a broad range of clients. Every client's comic/drawing will be unique and the outcome is situational. The client has the potential to show the counselor several indicators while explaining the comic/drawing. Some of the indicators include the following: abuse, neglect, want/need for attention, and power.

Population: Children/adolescents - group/individual. This expressive arts activity can be applied to a child who is experiencing emotional, behavioral, and academic difficulties.

Materials: Large piece of plain paper, markers, crayons, colored pencils, construction paper, glue

Instructions:
1. The counselor will provide the client with a large piece of plain paper and art supplies of their choice (crayons, markers, colored pencils, construction paper).
2. The counselor will ask the client to imagine a special event in their life that made them either happy or sad.
3. The counselor will then ask the client how they felt during their special moment.
4. The counselor will tell the client to choose the color of construction paper according to how they were feeling during their special moment. For example: the mock client talks about how he was excited to open presents. He chose the color yellow. He made himself yellow because yellow means excited to him. At the end of the story he made himself black because he said that the color black means that he was mad because his siblings had left him out.
5. The client will then create/draw the special moment on the large sheet of white paper using their construction paper and other art materials. The client can make a comic strip, timeline of events, etc.
6. Once the client completes their expressive arts project, the counselor will ask questions:
a. Describe your picture. How did you pick your special moment?
b. When you look back on your special moment, does it make you happy or sad?
c. Why does it make you feel this way?
d. What are some words you would use to describe the way you feel when you talk about your special moment timeline?
e. Who is friendly with whom in the picture? Who is not friendly in the picture?
f. Who is accepted?
g. Who has the power?
*Depending on the special moment timeline, questions will vary with the client.

Mock client scenario: The client is a six-year-old male who has an older sibling that is twelve and a younger sibling that is two. The client's special moment timeline shows him and his family on Christmas morning. The boy begins his story by walking down the stairs to open up his gifts. As his comic continues, he has positioned himself away from his siblings as he is opening up his gifts. While the counselor asks him questions about his timeline, he proceeds to say that he always feels left out, and that his mom and dad show more love to his siblings. At the end of his timeline he draws his two siblings going outside to play with the toys that they got for Christmas, and he asks the question, "Who will play with me and my new toys?"

Goals of Adlerian Theory:
1. Relationships
2. Assessment
3. Understanding and insight
4. Reorientation and reeducation

Phases of Adlerian play therapy:
1. Relationship development
2. Exploration of lifestyle
3. Goals of behavior
4. Faulty thinking
5. Maladaptive behaviors
6. Facilitation

Birth Order: Adler's 5 psychological positions:
1. Oldest child - prefers to be first, receives more attention, spoiled
2. Second of only two - behaves as if in a race, usually opposite to the first child
3. Middle - feels left out, surrounded by competitors
4. Youngest - the baby, with a lot of ground to cover in order to catch up
5.