2. Knowledge-based methods and tools
A knowledge-based system can be defined as a computerised system that uses knowledge about some domain in order to deliver a solution to a problem. The first generation of knowledge-based systems was expert systems, which use a set of facts and rules. Such a system is essentially composed of two components: a knowledge base (KB) and an inference engine. It applies domain-specific knowledge to problem-specific data to generate problem-specific conclusions. The next KBS generation was case-based systems, which use previous solutions to problems as a guide to solving new ones. Knowledge-based systems are widely acknowledged to be the key to enhancing productivity in industry, but the major bottleneck of their construction is knowledge acquisition, i.e. the process of capturing expertise before its implementation in a system. Some methodologies assist developers in defining and modelling the problem in question, such as Structured Analysis and Generation of Expert Systems (STAGES) and KADS (an acronym that has been redefined many times, e.g. Knowledge Acquisition Documentation System and Knowledge-based system Analysis and Design Support). Moreover, these approaches have been enriched to take into account project management, organisational analysis, knowledge acquisition, conceptual modelling, user interaction, system integration and design. Consequently, knowledge modelling in engineering must be based on a rich and structured representation of this knowledge, together with an adequate mode of user interaction for modelling and using it. Owing to the complexity of engineering knowledge, knowledge modelling in engineering is a demanding task.
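The two-component architecture described above (knowledge base plus inference engine) can be sketched minimally. The facts, rules and conclusions below are purely illustrative, not taken from any system in the source text; the engine is a plain forward-chaining loop, the classic first-generation mechanism.

```python
# Minimal sketch of a first-generation expert system: a knowledge base
# of facts and rules, plus a forward-chaining inference engine.
# All facts, rules and conclusion names are illustrative examples.

facts = {"fever", "cough"}  # problem-specific data

# Knowledge base: each rule is (set of premises, conclusion)
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Apply rules to the facts until no new conclusion can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

The separation matters: domain experts maintain `rules` without touching `forward_chain`, which is exactly the decoupling of knowledge base and inference engine the text describes.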
A second factor that seems to make relatively little difference to the way the task is performed is the class of user. Two major classes of users perform this task: novices and experts. Novices are not familiar with the system and must learn it at both the conceptual and the detailed level; experts know the system well, and may even have written it, but are still unable to maintain a complete-enough mental model of its details. The main difference between novice and expert software engineers is that novices are less focused: they have no clear idea of which items in the source code to start searching, and spend more time studying things that are, in fact, not relevant to the problem. It appears that novices are less focused merely because they do not have enough knowledge about what to look at; they rarely set out to deliberately learn about aspects of the system that do not bear on the current problem. The vision of a novice trying to ‘learn all about the system’ therefore seems to be a mirage.
Lessons learned from the systematic reviews demonstrated that an iterative approach can be beneficial when working with domain novices. Literature reviews require a large amount of knowledge, a challenge similar to that of many software development projects. Reviewers should therefore follow the recommendations of software development process experts and use an iterative method instead of a gate-based waterfall method. As the review advances, the novices' perception of the domain changes, and the design of the review should evolve accordingly. The research questions may have to be rewritten, the selection procedure may need adjustment, the extraction forms and analysis tables may require revision, and the synthesis conclusions may need to be redesigned. This approach should produce better and more accurate results iteration after iteration, reflecting the progressive gain in expertise on the part of the novices. It should enable reviewers to calibrate the effort required through the addition or removal of iterations. In addition, the iSR approach should make it possible to improve a review that ended with only partial results, through the addition of further iterations.
4.4 Linking knowledge and skills
There is a strong relationship between the Knowledge Areas and topics of the SWEBOK and the activities of a corporate software engineering baseline. An interesting issue arose when we worked to establish the diploma supplement for our immersion curriculum. For each unit of the curriculum, we were required to establish two lists: one for the knowledge covered in the unit and the other for the abilities linked to it. We easily established the abilities list from the description of activities and tasks found in the corporate baseline and in the work cards of the curriculum. It was far more difficult to extract the knowledge really covered by the unit, even with the help of the SWEBOK. Let us present an example with the “Quality Assurance (QA)” unit. During this activity, students perform the quality assurance controls established in the QA plan, mainly technical reviews and product inspections. Such are the abilities of the QA unit. Let us now look at the SWEBOK guide's “Software Quality” KA. This KA is divided into 11 subtopics, including the “Reviews and Audits” subtopic. In practice, students learn this subtopic
Our interest in the identification of the fundamental principles of software engineering results from work on the development of software engineering practice standards. It is widely posited that practice standards should be based upon the observation, recording and consensual validation of implemented “best practices.” This strategy has resulted, though, in the development of a corpus of standards that are sometimes alleged to be isolated, unconnected and disintegrated, because each standard performs a local optimization of a single observed practice. It is hoped that the identification of a set of fundamental principles will provide a broad and rich framework for establishing relationships among groups of practice standards. A set of fundamental principles of the field could also help characterize the activities that differentiate software engineering from other computer-related activities, and could help better define training programs. The identification of principles viewed as fundamental by the software engineering community would also provide a rich framework for analyzing and improving the Guide to the Software Engineering Body of Knowledge (Bourque, 1999). This guide aims to provide topical access to the core subset of knowledge that characterizes the software engineering discipline.
Oracle Designer was chosen with a view to generating a large part of the implementation from models. Models are produced according to CADM, the companion method of the Oracle CASE tool suite. Moreover, the information system was to be commissioned in several stages through successive work packages. In terms of technique, development tools and organization, highly innovative and strong choices were made. As well-disciplined project managers, we started the project by tailoring the TEMPO baseline. We spent about a month writing the Project Plan, especially to define the managerial and technical processes. The main difficulty was to tailor the software development process with only partial knowledge of the Designer tool and its associated CADM method, and without any real practice. The project team received several weeks of technical training. The project therefore started with an optimistic software development process, in accordance with TEMPO requirements and intended to address two different issues:
The whole apprenticeship plan of action is guided by the development process, which defines, among other things, the role and schedule of project stages. A first iteration of the process lets students acquire the knowledge and skills needed for each stage, a second iteration is intended to transform knowledge into abilities, and the last iteration allows them to apply all this in a firm.
This apprenticeship plan of action relies on the primary authors of this paper. Before joining the university, both were software project managers for several years within the same software services company. New lecturers or intervening professionals should reinforce this pedagogical team. This new form of teaching clearly involves three different kinds of activities:
The most commonly used assessment model in the software community is the SW-CMM. It is also important to recognize that ISO/IEC 15504 is an emerging international standard on software process assessment. It defines an exemplar assessment model and conformance requirements for other assessment models. ISO 9001 is also a common model that has been applied by software organizations (usually in conjunction with ISO 9000-1). Other notable examples of assessment models are Trillium, Bootstrap, and the requirements engineering capability model. Maturity models for other software processes are also available, such as a testing maturity model, a measurement maturity model, and a maintenance maturity model (although many more capability and maturity models have been defined, for example for design, documentation, and formal methods, to name a few). A maturity model for systems engineering has also been developed, which would be useful where a project or organization is involved in the development and maintenance of systems including software. The applicability of assessment models to small organizations has also been addressed, with assessment models tailored to small organizations being presented.
architectural flaws in code. It starts by generating a graph describing a run-time architecture using static analysis. Security properties are then assigned to the graph of objects. The process is incremental and semi-automatic, since the architect gains knowledge about the software architecture by querying the objects of the graph and annotating them with security properties such as trust, criticality, etc. The architect defines a security policy as a set of constraints over the sets returned by queries. The constraints in this approach are highly dependent on the application and are neither generic nor reusable, whereas our approach aims at fostering reuse. Another work presents a framework for detecting flaws in the code. The code is first transformed into STRIDE Data Flow Diagrams (DFDs) using static analysis. Then, based on a 'best practice' repository where threat patterns are stored, an automatic check is performed to detect the threats, and security measures that may mitigate them are applied as annotations to the DFDs. SecureUML is a modeling language for specifying access control requirements in terms of declarative aspects based on Role-Based Access Control (RBAC), extended with authorization constraints to specify dynamic properties in terms of programmatic aspects. Basin et al. use a metamodel called SecureUML+ComponentUML that combines SecureUML and ComponentUML (a system design modeling language for component-based systems). This metamodel is used to model security design models and security scenarios starting from an informal security policy. The two artifacts are analyzed by evaluating OCL queries. The evaluation serves, on the one hand, to detect and correct design flaws, if any, in the security design model or in the security scenario; on the other hand, it provides information about the allowed accesses of each user.
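The query-and-annotate approach described at the start of this passage can be sketched in a few lines. The object graph, the annotation values and the policy constraint below are all hypothetical examples, not the cited authors' implementation; the point is only to show how a constraint over query results flags a flaw.

```python
# Illustrative sketch of an annotated run-time object graph and a
# security-policy constraint over query results. All names are hypothetical.

# Object graph: object -> set of objects it references
graph = {
    "WebFacade": {"AuthService", "Logger"},
    "AuthService": {"CredentialStore"},
    "Logger": set(),
    "CredentialStore": set(),
}

# Architect-supplied security annotations per object
annotations = {
    "WebFacade": {"trust": "low"},
    "AuthService": {"trust": "high", "criticality": "high"},
    "CredentialStore": {"trust": "high", "criticality": "high"},
    "Logger": {"trust": "low"},
}

def query(prop, value):
    """Return the set of objects whose annotation `prop` equals `value`."""
    return {o for o, a in annotations.items() if a.get(prop) == value}

def violates_policy():
    """Constraint: no low-trust object may reference a high-criticality one."""
    low_trust = query("trust", "low")
    critical = query("criticality", "high")
    return {(src, dst) for src in low_trust
            for dst in graph[src] if dst in critical}

print(violates_policy())  # flags the WebFacade -> AuthService edge
```

As the text notes, such constraints are bound to the application's own object names, which is why they do not transfer to other systems without rewriting.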
SecureUML is specifically designed for evaluating the authorizations of an application's access controls, whereas our approach evaluates whether the architecture has the necessary mechanisms to mitigate threats. In that sense, the two approaches are complementary. Other works do not formalize security scenarios. For instance, Alkussayer et al. report a scenario-based and risk-based evaluation framework for assessing software architecture security. The process generates security scenarios and evaluates threats; if the results are unsatisfactory, a set of security patterns is integrated to mitigate them.
Despite these different interests and perspectives, it follows that integrating a knowledge component can improve both the efficiency of software processes and their quality. However, the latest specification of the Software & Systems Process Engineering Meta-model (SPEM 2.0), the OMG's “de facto” standard for software process modeling, does not support this concern. It focuses on a structural view and does not define support for such behavior modeling. That is why there is a need to extend this meta-model to support a knowledge-oriented modeling perspective on top of the activity-oriented one. A typical problem faced by project managers when starting a software project, whether new or maintenance, is the question: do we have the necessary knowledge to complete the project? Data is required to support an informed decision: for all the interrelated activities that are the units of work of a given process, is it possible to measure the knowledge required to carry out each task and to map this data to the knowledge provided by roles (primary and additional) as well as by input artifacts? Hence, there is a need for a dashboard that would help develop indices of knowledge discrepancies. We therefore propose a formalism based on: 1) the SPEM standard, which is used for building the syntactic structure, thus providing a standardized static structural view; and 2) an extension based on the relationships between components of that structural view, which is used to formalize the semantic relationships between SPEM elements, thus supporting a conceptual view of knowledge. This formal approach allows process designers to create, represent, analyze and validate a knowledge view of a process model.
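The knowledge-discrepancy question above (does the knowledge required by a task exceed what its roles and input artifacts provide?) reduces to a set comparison. The sketch below is an assumed, simplified reading of that idea; the knowledge items, role names and the coverage ratio are hypothetical illustrations, not defined by SPEM 2.0.

```python
# Hedged sketch of a knowledge-discrepancy indicator for one task:
# required knowledge vs. knowledge provided by roles and input artifacts.
# All knowledge items and names are hypothetical examples.

task_required = {"UML", "Java", "domain_banking", "security"}

provided_by_roles = {
    "developer": {"Java", "testing"},
    "analyst": {"UML"},
}
provided_by_artifacts = {
    "requirements_doc": {"domain_banking"},
}

def knowledge_gap(required, *providers):
    """Knowledge items required by a task but covered by no role or artifact."""
    available = set().union(*(k for p in providers for k in p.values()))
    return required - available

gap = knowledge_gap(task_required, provided_by_roles, provided_by_artifacts)
coverage = 1 - len(gap) / len(task_required)  # simple discrepancy index
print(gap, coverage)
```

Aggregating such per-task indices over all activities of a process would give the kind of dashboard data the text calls for.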
An attempt has to be made to relate the university and industrial phases of the student's experience. Fortunately, the ability model of our system (which can be considered as the pedagogical objectives) is based on a simplified model of professional activities. It may thus help apprentices link the two sides of their competency building, avoiding the situation where apprentices are ‘climbing two ladders simultaneously’ and ascent up the university ladder is unrelated to progress on the other ladder in their firms. Our ability model establishes a structure that directly supports the personal and team construction of the knowledge and skills required to practise the engineering of a software project. For each ability or transverse competency, the student assesses himself/herself at a maturity level. The assessment scale grows from 1 to 5:
1. Smog: vague idea (or even no idea at all);
2. Notion: has a notion, a general idea, but one insufficient for an operational undertaking;
3. User: is able to perform the ability with the help of an experienced colleague and has a first experience of its achievement;
4. Autonomous: is able to work autonomously;
5. Expert: is able to act as an expert to modify, enrich or develop the ability.
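A minimal data representation of this five-level scale makes the self-assessment mechanics concrete. The ability names and recorded levels below are invented examples; only the level names and their meanings come from the scale above.

```python
# Sketch of the five-level ability self-assessment scale; the ability
# names and the apprentice's recorded levels are hypothetical examples.

MATURITY = {
    1: "Smog",        # vague idea, or no idea at all
    2: "Notion",      # general idea, insufficient for operational work
    3: "User",        # performs with help of an experienced colleague
    4: "Autonomous",  # works autonomously
    5: "Expert",      # can modify, enrich or develop the ability
}

# Self-assessments per ability, recorded at successive points in time
assessment = {
    "technical_reviews": [2, 4],  # Notion -> Autonomous
    "project_planning": [1, 3],
}

def progress(ability):
    """Return the (initial, latest) maturity labels for an ability."""
    levels = assessment[ability]
    return MATURITY[levels[0]], MATURITY[levels[-1]]

print(progress("technical_reviews"))
```

Tracking the level sequence per ability is what lets university tutors and firm supervisors read the same ladder, rather than two unrelated ones.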
10.4.2.1 Prediction of PBR productivity as a function of radiation conditions
Generally, in the current perspective of mass-scale production of algae as a new feedstock source for various applications, predicting productivity is obviously useful (productivity calculations, cultivation system engineering, advanced control settings, etc.). However, the broad variability of sunlight in time and space adds further complexity to the optimization and control of cultivation systems, compared with artificial illumination. Modeling can be very helpful in this regard, and the approach was recently extended by the authors to that end by considering specific features of solar use, such as (1) the direct/diffuse radiation proportions in sunlight, and (2) the time variation of the incident light flux and of the corresponding incident angle on the surface of the cultivation system. All these variables can be obtained from a solar database giving the time (day/night, season) and space variability of solar radiation. They can then be implemented in a PBR model, using the same approach as described above. Besides the specific nature of sunlight (non-normal incident angle, non-negligible diffuse radiation), an important difference lies in its transient nature. The transient form of the mass balance equation thus has to be solved (this can be achieved using the routine ode23tb in the Matlab® software), ultimately allowing the determination of the biomass
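The transient solution step can be sketched in Python, with scipy's `solve_ivp` standing in for Matlab's `ode23tb`. The growth law (a simple light-limited Monod form), the idealised sinusoidal day/night cycle, and every parameter value below are illustrative assumptions, not the authors' PBR model.

```python
# Hedged sketch: transient biomass balance dX/dt = mu(I(t))*X - D*X for an
# outdoor PBR under a day/night light cycle. Growth law and all parameter
# values are assumptions; scipy's solve_ivp replaces Matlab's ode23tb.
import numpy as np
from scipy.integrate import solve_ivp

mu_max = 0.1   # 1/h, assumed maximum specific growth rate
K_I = 100.0    # W/m^2, assumed half-saturation light constant
D = 0.01       # 1/h, assumed dilution rate

def irradiance(t):
    """Idealised diurnal cycle: sinusoidal daylight, zero at night."""
    return max(0.0, 500.0 * np.sin(2 * np.pi * t / 24.0))

def biomass_balance(t, X):
    """Transient mass balance: light-limited growth minus dilution."""
    mu = mu_max * irradiance(t) / (K_I + irradiance(t))
    return mu * X - D * X

# Integrate over 5 days from an initial concentration of 0.5 kg/m^3
sol = solve_ivp(biomass_balance, (0, 120), [0.5], max_step=0.5)
print(f"final biomass concentration: {sol.y[0, -1]:.2f} kg/m^3")
```

In a realistic run, `irradiance` would be replaced by interpolated values from the solar database the text mentions, including the direct/diffuse split and incident-angle correction.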
Most process environments adopt a top-down approach in which the whole process is first modeled by a process designer and then deployed in a project, where the real enactment happens through process actors, i.e., the persons operating the process. However, it is in general almost impossible for process designers to model the entire process accurately, as they do not have a thorough knowledge of the process's application domain. Thus, several details of the process may initially be left unspecified, some execution scenarios may be unforeseen, and the process model may not describe exactly what will be done by process actors. Consequently, deploying a process model in a process environment to manage a project requires an important refinement that is not evident for process actors. First, they are not trained to work with complicated process modeling languages. Second, they do not have a global view of the process; each process actor only knows his or her own activities. For this reason, traditional process environments are still weakly adopted by end-users in the systems and software industry. To promote the practical use of process environments, we put the main emphasis on process actors and aim at providing
Abstract
Maintenance management plays an important role in the monitoring of business activities. It ensures a certain level of service in industrial systems by improving their ability to function in accordance with prescribed procedures. This has a decisive impact on the performance of these systems in terms of operational efficiency, reliability and associated intervention costs. To support the maintenance processes of a wide range of industrial services, a knowledge-based component is useful for performing intelligent monitoring. In this context we propose a generic model for supporting and generating industrial lighting maintenance processes. The modeled intelligent approach involves information structuring and knowledge sharing in the industrial setting, and the implementation of specialized maintenance management software in the target information system. As a first step, we defined computerized procedures from the conceptual structure of industrial data to ensure their interoperability and the effective use of information and communication technologies in the software dedicated to maintenance management (E-candela). The second step is the implementation of this software architecture with the specification of business rules, especially by organizing taxonomical information on the lighting systems, and by applying intelligence-based operations and analysis to capitalize knowledge from maintenance experiences. Finally, the third step is the deployment of the software with contextual adaptation of the user interface, to allow the management of operations, the editing of balance sheets, and real-time location obtained through geolocation data. In practice, these computational intelligence-based modes of reasoning involve an engineering framework that facilitates the continuous improvement of a comprehensive maintenance regime.
The SWEBOK was developed as an international collective effort, in order to achieve the goal of providing a consistent global view of software engineering. The committee appointed two chief editors, several co-editors to support them, and editors for each of the Knowledge Areas. All chapters were openly reviewed, in an editing process that engaged approximately 150 reviewers from 33 countries. Professional and scientific societies, as well as public agencies from all over the world involved in software engineering, were contacted, made aware of this project, and invited to participate in the review process too. Presentations on the project were made at various international venues. The 2004 edition was revised using the same editing process, giving birth in 2014 to the current version (v3) of the SWEBOK [4]. The SWEBOK has been adopted by ISO and IEC as ISO/IEC TR 19759:2005.
4.3.3 International Standards. The references to modeling standards in the SWEBOK (and hence in the MBEBOK) were questioned. Some of the standards currently listed in the SWEBOK no longer seem to be relevant or widely used by the modeling community (in particular, most of the tool-related standards). Other references and documented practices are, however, highly recognized and widely adopted (e.g. those from the Eclipse Foundation, acting as de-facto standards). Should they be included in the MBEBOK, too? The problem with MBSE-related standards is that they may evolve too rapidly, which is an argument for their exclusion (like MBSE tools, which for that reason we chose to omit; see Section 4.4). For example, the SWEBOK mentions the initial versions of UML (1.4, 2.0), now completely superseded. On the other hand, not including any international standard would probably send a wrong message to industry, especially because MBSE practitioners do use standards and do care about the interoperability and reusability of the artefacts of the discipline.
Thus, our work is motivated by the following observations. First, the system developer usually uses only part of the system model for a specific activity (an analysis to be performed on the model); indeed, some concepts are not useful for a given analysis. For example, real-time analysis does not require all the functional concepts of the analyzed system. Yet there is no explicit definition of the concepts of a given model that a given model analysis requires. Second, when performing a model analysis, the system developer does not explicitly describe the analysis that he or she used. In order to take a design decision, another system developer needs all the information and hypotheses related to the model analyses already carried out: the result of one analysis may interfere with the inputs and results of another, so information about the method, tool and properties used for an analysis may be needed to properly evaluate its output. The system developer needs to know, for example: what model analyses were performed (tool, method, inputs, outputs, etc.)? What hypotheses were made by the other analyses? And what parts of the system have been analyzed?
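The missing "analysis record" the questions above call for can be captured with a simple data structure. The record shape, the tool choice and all field values below are hypothetical illustrations of the idea, not the authors' formalism.

```python
# Sketch of an explicit analysis record answering the questions above:
# which analysis ran, with which tool, inputs, hypotheses and outputs.
# The tool choice and all field values are hypothetical examples.
from dataclasses import dataclass, field

@dataclass
class AnalysisRecord:
    name: str
    tool: str
    method: str
    inputs: set
    outputs: set
    hypotheses: list = field(default_factory=list)
    analyzed_parts: set = field(default_factory=set)

rt = AnalysisRecord(
    name="real-time schedulability",
    tool="Cheddar",  # hypothetical tool choice
    method="rate-monotonic analysis",
    inputs={"task_periods", "wcet"},
    outputs={"schedulable"},
    hypotheses=["single core", "no task migration"],
    analyzed_parts={"control_subsystem"},
)

def interferes(a, b):
    """Two analyses may interfere if one's outputs feed the other's inputs."""
    return bool(a.outputs & b.inputs or b.outputs & a.inputs)
```

With such records, a second developer can mechanically check which earlier analyses a new one depends on, instead of reconstructing that knowledge from memory.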
For instance, interviewee A's community copied some sentences from another code of conduct, which was acknowledged explicitly in the text. His opinion was confirmed by interviewee C, since A realized that some communities have similarities that can lead to the reuse of codes of conduct. In Section 7.3, we also observed that many projects reused another code of conduct. Evolution: Similar to software artefacts, codes of conduct evolve as well. Interviewee B explained that their code of conduct has been updated five or six times in 10 years. Starting from a short document that did not seem to be that serious, several elements were gradually changed: for example, the phrasing changed from rule-based to value-based, and significant details were added on the leadership of the community, whereas leadership had not been a big deal for them at the beginning. He also stated that throughout all of these versions, they only changed the textual expressions, not the intention of the code of conduct. According to interviewee A, every new suggestion about the code of conduct from their community members is welcomed as long as there are enough arguments and justifications. In interviewee C's community, discussion of the code of conduct is started by the board, then passed around to all members via the member mailing list. This list is where community needs are addressed and implemented as necessary. Afterwards, changes are voted on by the board membership.
Knowledge and Technology Transfer in Materials Science and Engineering in Europe
Funding programmes or initiatives in Europe have longer timelines than in North America and East Asia, or they are under-resourced. A clear example of this issue is the lack of administrative simplicity in EU research projects. On the other hand, financial risk aversion leads to delays in implementing new technologies, and the time-to-market is often extended by the lack of easy access to loans and financial guarantees. This is critical for advanced materials demonstrator projects, which usually require quick decisions to speed up their implementation and to ensure that the resulting innovative products can be brought to market in time, a decisive requirement if there is to be an optimal return on investment. Generally, Europe is too slow in exploiting its patents and applying its research. One reason for this, which also contributes to slow response times and hinders the implementation in Europe of effective responses in relation to the last two bullet points above, is the lack of a simplified European patent application process. The underlying political discussion is complex and beyond the scope of the present report. Nonetheless, we must emphasise the need for a simple and common scheme of patent applications in all EU countries, written in one (English) or two languages. A simplified framework is considered a crucial ‘must-have’ to accelerate the effective transfer of knowledge from academia to industry at the European scale.
2 ICD, Université de Technologie de Troyes, Troyes, France
Abstract. In an increasingly competitive environment in the manufacturing industry, the control of time, cost and performance enables companies to stand out and take the lead. We are witnessing the proliferation of design-aiding solutions that support designers in their work, and it is through these solutions that developers and researchers aim to improve product development and control all the aspects of a project raised above. One of the activities used in product development is Reverse Engineering, which allows the extraction of information from an existing physical product. In industry, this activity may be used to maintain long-life products, or to perform re-design, re-engineering, re-manufacturing, etc. In this paper, we propose an approach that allows the management of a global reverse engineering process for complex mechanical assemblies.