© 20002003, Steve Easterbrook
University of Toronto
What are Nonfunctional Requirements?
1
University of Toronto
Functional vs. Non-Functional
  • Functional requirements describe what the system should do
      - things that can be captured in use cases
      - things that can be analyzed by drawing sequence diagrams, statecharts, etc.
      - functional requirements will probably trace to individual chunks of a program
  • Non-functional requirements are global constraints on a software system
      - e.g. development costs, operational costs, performance, reliability, maintainability, portability, robustness, etc.
      - often known as the "ilities"
      - usually cannot be implemented in a single module of a program
The challenge of NFRs
  • hard to model
  • usually stated informally, and so are:
      - often contradictory
      - difficult to enforce during development
      - difficult for the customer to evaluate prior to delivery
  • hard to make them measurable requirements
      - we'd like to state them in a way that we can measure how well they've been met
© 20002003, Steve Easterbrook
© 20002003, Steve Easterbrook
Product vs. Process?
  • Product-oriented approaches
      - focus on system (or software) quality
      - aim is to have a way of measuring the product once it's built
  • Process-oriented approaches
      - focus on how NFRs can be used in the design process
      - aim is to have a way of making appropriate design decisions
Quantitative vs. Qualitative?
  • Quantitative approaches
      - find measurable scales for the quality attributes
      - calculate the degree to which a design meets the quality targets
  • Qualitative approaches
      - study various relationships between quality goals
      - reason about trade-offs etc.
Approaches to NFRs
Definitions
  • quality criteria; metrics
  • example NFRs
Product-oriented software qualities
  • making quality criteria specific
  • catalogues of NFRs
  • example: reliability
Process-oriented software qualities
  • softgoal analysis for design trade-offs
Example NFRs

Interface requirements
  • how will the new system interface with its environment?
      - user interfaces and "user-friendliness"
      - interfaces with other systems
Performance requirements
  • time/space bounds
      - workloads, response time, throughput and available storage space
      - e.g. "the system must handle 1,000 transactions per second"
  • reliability
      - the availability of components
      - integrity of information maintained and supplied to the system
      - e.g. "system must have less than 1hr downtime per three months"
  • security
      - e.g. permissible information flows, or who can do what
  • survivability
      - e.g. the system will need to survive fire, natural catastrophes, etc.
Operating requirements
  • physical constraints (size, weight)
  • personnel availability & skill level
  • accessibility for maintenance
  • environmental conditions
  • etc.
Lifecycle requirements
  • "future-proofing"
      - maintainability
      - enhanceability
      - portability
      - expected market or product lifespan
  • limits on development
      - e.g. development time limitations
      - resource availability
      - methodological standards
      - etc.
Economic requirements
  • e.g. restrictions on immediate and/or long-term costs
Software Qualities

Think of an everyday object
  • e.g. a chair
  • how would you measure its "quality"?
      - construction quality? (e.g. strength of the joints, …)
      - aesthetic value? (e.g. elegance, …)
      - fit for purpose? (e.g. comfortable, …)
All quality measures are relative
  • there is no absolute scale
  • we can sometimes say A is better than B…
      - …but it is usually hard to say how much better!
For software:
  • construction quality?
      - software is not manufactured
  • aesthetic value?
      - but most of the software is invisible
      - aesthetic value matters for the user interface, but is only a marginal concern
  • fit for purpose?
      - need to understand the purpose

Software quality is all about fitness to purpose
  • does it do what is needed?
  • does it do it in the way that its users need it to?
  • does it do it reliably enough? fast enough? safely enough? securely enough?
  • will it be affordable? will it be ready when its users need it?
  • can it be changed as the needs change?
Quality is not a measure of software in isolation
  • it measures the relationship between software and its application domain
      - cannot measure this until you place the software into its environment…
      - …and the quality will be different in different environments!
  • during design, we need to predict how well the software will fit its purpose
      - we need good quality predictors (design analysis)
  • during requirements analysis, we need to understand how fitness-for-purpose will be measured
      - what is the intended purpose?
      - what quality factors will matter to the stakeholders?
      - how should those factors be operationalized?
Factors vs. Criteria

Quality factors
  • these are customer-related concerns
      - examples: efficiency, integrity, reliability, correctness, survivability, usability, ...
Design criteria
  • these are technical (development-oriented) concerns such as anomaly management, completeness, consistency, traceability, visibility, ...
Quality factors and design criteria are related
  • each factor depends on a number of associated criteria:
      - e.g. correctness depends on completeness, consistency, traceability, ...
      - e.g. verifiability depends on modularity, self-descriptiveness and simplicity
  • there are some standard mappings to help you…
During analysis
  • identify the relative importance of each quality factor
      - from the customer's point of view!
  • identify the design criteria on which these factors depend
  • make the requirements measurable
Boehm's NFR list (Source: see Blum, 1992, p176)

[The original slide shows a tree of quality characteristics; its labels are:]

General utility
  • As-is utility: reliability, efficiency, usability
  • Maintainability: testability, understandability, modifiability
  • Portability

Lower-level criteria in the tree: device-independence, self-containedness, accuracy, completeness, robustness/integrity, consistency, accountability, device efficiency, accessibility, communicativeness, self-descriptiveness, structuredness, conciseness, legibility, augmentability.

(The slide also labels a "Fitness" figure, sourced from Budgen, 1994, pp58-9.)
McCall's NFR list (Source: see van Vliet, 2000, pp111-3)

[The original slide shows a tree of quality factors; its labels are:]

Product operation: usability, integrity, efficiency, correctness, reliability
Product revision: maintainability, testability, flexibility
Product transition: portability, reusability, interoperability

Lower-level criteria in the tree: operability, training, communicativeness, I/O volume, I/O rate, access control, access audit, storage efficiency, execution efficiency, traceability, completeness, accuracy, error tolerance, consistency, simplicity, conciseness, instrumentation, expandability, generality, self-descriptiveness, modularity, machine independence, s/w system independence, comms. commonality, data commonality.
Example Metrics

  Quality        Metric
  -----------    ---------------------------------------------------------
  Speed          transactions/sec; response time; screen refresh time
  Size           Kbytes; number of RAM chips
  Ease of use    training time; number of help frames
  Reliability    mean-time-to-failure; probability of unavailability;
                 rate of failure; availability
  Robustness     time to restart after failure; percentage of events
                 causing failure
  Portability    percentage of target-dependent statements; number of
                 target systems
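Several of the reliability metrics in the table above are related to one another. As a small illustration, steady-state availability can be derived from mean-time-to-failure and mean-time-to-repair; the sketch below is my own (the function name and the 1000h/2h figures are invented for illustration), not from the slides:

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the long-run fraction of time the system
    is operational.  mttf = mean time to failure (average uptime between
    failures); mttr = mean time to repair (average downtime per failure)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical component: fails on average every 1000 h, takes 2 h to repair.
print(f"{availability(1000.0, 2.0):.4f}")  # 0.9980
```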
Making Requirements Measurable (Source: Budgen, 1994, pp60-1)

We have to turn our vague ideas about quality into measurables. Examples:

  Quality concepts        Measurable quantities       Counts taken from design
  (abstract notions of    (define some metrics)       representations
  quality properties)                                 (realization of the metrics)

  reliability        →    mean time to failure?   →   run it and count crashes per hour???
  complexity         →    information flow        →   count procedure calls???
                          between modules?
  usability          →    time taken to learn     →   minutes taken for some user task???
                          how to use?
Example: Measuring Reliability
Definition
  • the ability of the system to behave consistently in a user-acceptable manner when operating within the environment for which it was intended
Comments:
  • reliability can be defined in terms of a percentage (say, 99.999%)
  • this may have different meanings for different applications:
      - telephone network: the entire network can fail no more than, on average, 1hr per year, but failures of individual switches can occur much more frequently
      - patient monitoring system: the system may fail for up to 1hr/year, but in those cases doctors/nurses should be alerted of the failure; more frequent failure of individual components is not acceptable
  • the best we can do may be something like:
      - "...No more than X bugs per 10KLOC may be detected during integration and testing; no more than Y bugs per 10KLOC may remain in the system after delivery, as calculated by the Monte Carlo seeding technique of appendix Z; the system must be 100% operational 99.9% of the calendar year during its first year of operation..."
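Percentages like 99.999% translate directly into permitted downtime, which is often easier for stakeholders to judge. A back-of-the-envelope sketch (the helper is my own, assuming a 365-day year):

```python
def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of permitted downtime per (365-day) year for a given
    availability percentage, e.g. 99.999 -> about 5.3 minutes."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return (1.0 - availability_pct / 100.0) * minutes_per_year

print(round(downtime_minutes_per_year(99.999), 1))  # "five nines": ~5.3 min/yr
print(round(downtime_minutes_per_year(99.9), 1))    # 99.9%: ~525.6 min/yr
```

For comparison, the telephone-network example above (at most 1hr downtime per year) corresponds to roughly 99.989% availability.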
© 20002003, Steve Easterbrook
12
Source:Chung, Nixon, Yu & Mylopoulos, 1999
14
15
© 20002003, Steve Easterbrook
University of Toronto
© 20002003, Steve Easterbrook
16
Ä...BUT, not all bugs are equally important!
ÜExample reliability requirement: Ä“The software shall have no more than X bugs per thousand lines of code” Ä...But is it possible to measure bugs at delivery time? ÜUse bebugging ÄMeasures the effectiveness of the testing process Äa number of seeded bugs are introduced to the software system Øthen testing is done and bugs are uncovered (seeded or otherwise)
Number of bugs= #of seeded bugsx #of detected bugs in system# of detected seeded bugs
13
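The seeding formula above is straightforward to compute. A minimal sketch, reading "# of detected bugs in system" as the non-seeded bugs found during testing (the usual capture-recapture interpretation); the example numbers are invented:

```python
def estimate_total_bugs(seeded: int, detected_nonseeded: int,
                        detected_seeded: int) -> float:
    """Bebugging estimate of the real bugs in the system, assuming real and
    seeded bugs are equally likely to be detected:

        number of bugs = seeded * detected_nonseeded / detected_seeded
    """
    if detected_seeded == 0:
        raise ValueError("no seeded bugs detected; estimate undefined")
    return seeded * detected_nonseeded / detected_seeded

# Invented example: 100 bugs seeded; testing finds 50 of them, plus 40 bugs
# that were not seeded.  Testing found half the seeded bugs, so we assume it
# also found half the real ones: an estimated 80 real bugs in total.
print(estimate_total_bugs(seeded=100, detected_nonseeded=40, detected_seeded=50))
# 80.0
```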
Making Requirements Measurable

Define 'fit criteria' for each requirement
  • give the 'fit criteria' alongside the requirement
  • e.g. for new ATM software
      - requirement: "The software shall be intuitive and self-explanatory"
      - fit criteria: "95% of existing bank customers shall be able to withdraw money and deposit cheques within two minutes of encountering the product for the first time"
Choosing good fit criteria
  • stakeholders are rarely this specific
  • the right criteria might not be obvious:
      - things that are easy to measure aren't necessarily what the stakeholders want
      - standard metrics aren't necessarily what stakeholders want
  • stakeholders need to construct their own mappings from requirements to fit criteria
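Once trial data exists, a fit criterion like the ATM example is directly checkable. A toy sketch (the 95% and two-minute thresholds come from the example criterion; the helper function and the trial data are invented):

```python
def meets_fit_criterion(task_times_sec, max_seconds=120.0,
                        required_fraction=0.95):
    """True if at least `required_fraction` of trial users completed the
    task within `max_seconds` on first encounter with the product."""
    if not task_times_sec:
        return False
    ok = sum(1 for t in task_times_sec if t <= max_seconds)
    return ok / len(task_times_sec) >= required_fraction

# Invented usability-trial data: first-use task times (seconds) for 20 users.
times = [65, 80, 95, 70, 110, 55, 90, 100, 85, 75,
         60, 115, 98, 72, 88, 66, 105, 130, 77, 82]
print(meets_fit_criterion(times))  # 19/20 = 95% within 2 min -> True
```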
© 20002003, Steve Easterbrook
University of Toronto
Department of Computer Science
University of TorontoDepartment of Computer Science Example model: Reliability growth Source:Adapted from Pfleeger 1998, p359 ÜMotorola’s Zerofailure testing model ÄPredicts how much more testing is needed to establish a given reliability goal Äbasic model: empirical constants b(t) failures =e testing time ÜReliability estimation process ÄInputs needed: test time Øfd = target failure density (e.g. 0.03 failures per 1000 LOC) Øtf = total test failures observed so far Øth = total testing hours up to the last failure ÄCalculate number of further test hours needed using: ln(fd/(0.5 + fd)) x th ln((0.5 + fd)/(tf + fd)) ÄResult gives the number of further failure free hours of testing needed to establish the desired failure density Øif a failure is detected in this time, you stop the clock and recalculate ÄNote: this model ignores operational profiles!
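The further-test-hours formula above can be transcribed directly into code, with fd, tf and th as defined on the slide. The sample inputs below are invented:

```python
import math

def further_test_hours(fd: float, tf: float, th: float) -> float:
    """Motorola zero-failure testing model: failure-free test hours still
    needed to establish target failure density fd, given tf total test
    failures so far and th testing hours up to the last failure."""
    return (math.log(fd / (0.5 + fd))
            / math.log((0.5 + fd) / (tf + fd))) * th

# Invented example: target 0.03 failures per 1000 LOC, 15 failures observed
# so far, last failure at 500 h of testing.
print(round(further_test_hours(fd=0.03, tf=15, th=500), 1))
# -> roughly 430 further failure-free hours needed; if a failure occurs in
#    that window, update tf and th and recalculate.
```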
Using softgoal analysis (Source: Chung, Nixon, Yu & Mylopoulos, 1999)

Goal types
  • non-functional requirement
  • satisficing technique
      - e.g. a design choice
  • claim
      - supporting/explaining a choice
Contribution types
  • AND links (decomposition)
  • OR links (alternatives)
  • Sup links (supports)
  • Sub links (necessary subgoal)
Evaluation of goals
  • satisficed
  • denied
  • conflicting
  • undetermined

NFR Catalogues (Source: Cysneiros & Yu, 2004)

Pre-defined catalogues of NFR decomposition
  • provides a knowledge base to check coverage of an NFR
  • provides a tool for elicitation of NFRs
  • example: [catalogue diagram not reproduced in this text]
© 20002003, Steve Easterbrook
17