Just Enough Ontology Engineering

This paper introduces 'just enough' principles and a 'systems engineering' approach to the practice of ontology development, to provide a minimal yet complete, lightweight, agile and integrated development process, supportive of stakeholder management and implementation independence.

Authors: P. Di Maio

'Just Enough' Ontology Engineering
Paola Di Maio
University of Strathclyde / ISTCS.org
75 Montrose Street, Glasgow, UK
paola.dimaio[@gmail.com

ABSTRACT
This paper introduces 'just enough' principles and a 'systems engineering' approach to the practice of ontology development, to provide a minimal yet complete, lightweight, agile and integrated development process, supportive of stakeholder management and implementation independence.

General Terms
Management, Measurement, Documentation, Performance, Design, Economics, Reliability, Human Factors, Standardization, Languages, Theory

Keywords
Ontology, Systems Science

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. WIMS'11, May 25–27, 2011, Sogndal, Norway. Copyright 2011 ACM 978-1-4503-0148-0/11/05…$10.00.

1. INTRODUCTION
The majority of information-centric systems today are designed to leverage knowledge expressed via natural language, where symbols and meanings (semiotics and semantics) need to be captured and represented adequately for these systems to function. Ontologies are conceptual and semantic representations widely used, in different forms, to capture and express such models. Ontology engineering methodologies (also called ontology development by some) have proliferated in recent years; however, developing an ontology for a given project or organisation can be very resource intensive, requires skills and expertise that are scarce, and can be a minefield of uncertainties. A plethora of ontology engineering (OE) methods, artifacts, tools and techniques has surfaced in recent years, often resulting from academic research dissemination, but OE has not become any easier. The choice of an appropriate ontology development methodology for any project may require a systematic evaluation of existing approaches, and this can become extremely resource intensive and time consuming. A default option is to follow no methodology at all (the 'who needs a methodology?' attitude), often preferred by developers who go straight into coding, and who may confuse 'on the fly' schema creation with fully fledged ontology development. JEOE incorporates principles of 'just enough' approaches into OE, with the intent to capture and synthesize essential aspects of OE independently of the methodology of choice. Just enough ontology engineering does not intend to be a methodology, nor to substitute for methodologies; rather, it is an approach to help specialists from various disciplines (not necessarily experts in model building) who may be called upon to contribute to conceptual modelling efforts, and to equip them with minimal all-round competences and understanding of ontology development. 'High order' OE skills are scarce, but also not particularly useful unless coupled with practical systems development, management, and a good measure of common sense, which, unfortunately, is not always the most salient characteristic of pure ontologists and theoreticians.

1.1 JEOE in a Nutshell
Even the smallest ontology, when well formed and properly 'grounded', can be reused and incorporated into (or at least referenced by) larger ontologies, thus effectively contributing to address one of the contemporary challenges: facilitating the access, validation, maintenance and reuse of existing knowledge.
JEOE consists mainly of a lean and compact set of steps, an agile process that can be adopted to navigate iteratively across the many interdependent OE activities, essentially providing what anyone with limited time and without formal training in logic, mathematics or philosophy may need to know in order to contribute intelligently to the discourse. JEOE can be thought of as 'ontology development for the rest of us' in a nutshell. It contributes a pragmatic, systemic approach to ontology building which is methodology independent, and provides a framework to help incorporate elements of systems development principles into the often challenging practice of ontology development. It also serves as a didactic/learning instrument for generalists, and as a tool to guide development across team members with different cognitive and disciplinary backgrounds. Another non-trivial problem that JEOE aims to tackle is the lack of publicly available, usable documentation resources for existing OE methodologies. Many OE papers, especially when authored by academics, may not be designed for practical use in real-world scenarios, nor publicly accessible without subscription to journals. JEOE tackles this limitation by developing, collaboratively and online, 'open' (licensed under Creative Commons) documentation and processes. Finally, JEOE considers every ontological activity as a form of boundary setting, contributing to define, set and enforce boundaries. Two of the key contributions of JEOE are 1) a strong emphasis on stakeholder analysis and management, and 2) implementation independence, respectively illustrated in the next two sections.

1.2 Stakeholders
A diverse stakeholder basis is necessary for a balanced mix of views and for the sustainability of ontologies, especially their use and long-term maintenance. OE consists of a specialized set of activities, requiring depth and breadth of understanding, knowledge and skills in a variety of fields.
It is becoming increasingly important to broaden the stakeholder base and to make this process accessible to as many participants as possible, but not at the expense of validity and 'ontological rigour', although even validity and rigour depend on where certain boundaries are set. Furthermore, stakeholders bring to the development table a much-needed socio-technical perspective, of which people and environments are important elements.

1.3 Implementation Independence
Ontologies, intended as shared conceptual schemas, should be understood, and to some extent even manipulated, by a variety of stakeholders with different skill sets (not necessarily skilled in the implementation language of choice), and used as part of sometimes heterogeneous architectures. JEOE therefore advocates a clear separation between ontology design and its implementation. Implementation independence is a well-established principle in systems science, and it constitutes one of the core features of systems architectures. In the early days of computing it was advocated by Childs [1]. Codd and Date applied it to the relational model as 'data independence' [2], [3], and Model Driven Architecture translates the principle into 'platform independence' [4]. In ontology, Tom Gruber [5] advocated implementation independence with the notion of 'minimal encoding bias', an important principle that is, however, sometimes disregarded by younger generations of ontologists. Finally, JEOE advocates and guides the development of an ontology by ensuring that scope, goal, and most of the conceptual modelling tasks are carried out in advance of any coding, and by not constraining the choice of encoding to a single language: in JEOE it does not matter what formalism the ontology is eventually represented with; it could be OWL, RDF, a description logic, or any encoding of choice. An ontology can exist as a conceptual model independently of its implementation and encoding.
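As an illustration of implementation independence, a conceptual model can be kept as plain data and only later committed to an encoding. The following is a minimal sketch, with hypothetical class and relation names, emitting both a naive Turtle rendering and JSON from the same model:

```python
import json

# Hypothetical conceptual model, kept separate from any encoding,
# so the same design can later target OWL/RDF, JSON, or another formalism.
concept_model = {
    "classes": ["Restaurant", "Menu", "Dish"],
    "relations": [("Restaurant", "offers", "Menu"),
                  ("Menu", "lists", "Dish")],
}

def to_turtle(model, base="http://example.org/onto#"):
    """Emit a naive RDF/Turtle rendering of the conceptual model."""
    lines = [f"<{base}{c}> a <http://www.w3.org/2002/07/owl#Class> ."
             for c in model["classes"]]
    lines += [f"<{base}{s}> <{base}{p}> <{base}{o}> ."
              for s, p, o in model["relations"]]
    return "\n".join(lines)

def to_json(model):
    """Emit the same model as plain JSON, for non-RDF consumers."""
    return json.dumps(model, indent=2)
```

The point is not the (deliberately naive) serializers, but that neither encoding is the ontology: the design lives in `concept_model`, and the encodings are derived from it.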
This detail is important to counter a trend where ontologies are developed straight in the ontology editor of choice, and are available only as .owl files with little or no other documentation. JEOE provides a compact, integrated and 'lean' summary overview that can be used as a learning tool and as a streamlined process to guide not only development but also, and especially, the 'management' of ontology development projects. At some point someone who may know little or nothing about ontologies will have to budget, cost, staff, and pay for ontology development projects. JEOE provides some background to assist with the information necessary for the budgeting, scheduling and administration of ontology-related projects. Not always, however, is what is currently included in JEOE going to be sufficient by itself to solve all ontology development challenges: in some cases more than 'just enough' OE may be required.

2. A SYSTEMIC WORLDVIEW
A system can be defined as 'a collection of components organized to accomplish a specific function or set of functions'. The term system encompasses individual applications, systems in the traditional sense, subsystems, systems of systems, product lines, product families, whole enterprises, and other aggregations of interest. A system exists to fulfill one or more missions in its environment [IEEE 1471]. For a systemist, almost everything that exists can be viewed as a system, or as part of one. JEOE is developed under a systemic worldview. In Creswell [9], worldview means a 'basic set of beliefs that guides action'; in our research, however, a worldview is a 'preferred conceptual model of the reality under observation', similar to what is referred to as a paradigm.
Creswell identifies four main worldviews involved with mixed-method research. Our worldview can be defined as 'systemic', and to some extent 'transformative', intended respectively as specializations and refinements of Creswell's pragmatic worldview, whereby 'systemic' refers to a worldview that identifies the widest possible boundary (considers the system as a whole), and 'transformative' to a worldview capable of delivering systemic change. JEOE considers an ontology as a 'system' with many dimensions: for example, it can be a cognitive tool (a conceptual mental model), as well as a knowledge representation mechanism in the artificial intelligence sense. A systems engineering approach is useful to provide consistency and coordination for the many disparate ontology development activities when they are considered as a 'system'; but more importantly, the systems approach helps to keep in mind that an ontology is part of a greater whole: the system that ontologies are developed to serve (the web), together with their deployment in any target environment and their intended socio-technical usage, with all their implications.

2.1 Principles of Systems Engineering
There is no single, definitive shortlist of what the 'principles of systems engineering' consist of; however, some SE principles have been instrumental in the development of agile methods [7], namely:
• Start with your eye on the finish line (be pragmatic about an ontology project)
• Stakeholder involvement is key
JEOE also matches neatly the requirements identified in other recent ontology-related methods such as DILIGENT [6], namely: decentralization, partial autonomy, iteration, and non-expert builders.

3. JUST ENOUGH APPROACHES
The software engineering practice is relatively new.
It is interesting to notice how, as systems have gradually become larger, more complex and more sophisticated, code has become more efficient (more functionality per line of code), and software development methodologies have tended to become leaner and more agile. 'Just enough' approaches started to emerge when personal computing was just a prospect, and 'structured' approaches to systems analysis and design promised to capture diagrammatic and schematic representations of essential aspects of systems components. This was a response to the growing demand for a non-formal (non-mathematical) way of expressing systems requirements and functionalities, one that would be more articulate than a purely narrative description of a system's functionalities. That is when structured charts, data flow and data model diagrams, and data dictionaries started to come into use, with the aim of capturing and representing what counts most in a given set of design and modelling activities. Software design is considered an engineering activity, but in many ways it is still a bit of an art. So is ontology development. Directly or indirectly, structured approaches have had a huge influence on modern engineering. In a first-person account, Tom DeMarco [10] writes about the events that led him to devise structured analysis, with its successes and failures. He admits that "Important parts of the Structured Analysis method were and are useful and practical, as much so today as ever. Other parts are too domain-specific to be generally applicable. And still other parts were simply wrong". Structured approaches are typically viewed as 'top down' or 'waterfall' methods, which perhaps explains why, despite their inherent propensity for iteration, they can end up not always being considered 'agile'.
One of the most successful interpretations of the structured approach to data analysis is to be found in 'Just Enough Structured Analysis' (JESA) [12], a book that offered 'the best of' structured analysis in simplified form. The JESA wiki states: "Today, we're too busy to spend much time thinking about anything, and we're also far too busy to read more than a couple hundred pages of the bare essentials on any topic. What we want is "just enough" — enough to give us the basic idea, enough to get us started, enough to give us a grounding in the fundamentals." This was true in the early days of data modeling, and it remains true today. The 'just enough' philosophy has left a mark on much current IT literature and practice. Today, principles of 'just enough' thinking and structured methodologies inspire leaner yet robust approaches in many fields. Now a 'just enough' approach is called for in OE, which is a discipline in its own right (and, to some, a cult), and which has grown into an immense body of knowledge that is difficult, if not impossible, to absorb and process for the average IT team working under time and budget constraints. In addition to adopting diagrammatic notation, a structured approach contributes to consolidating the notion of abstraction in ontology development practice. In data modeling, abstraction is what allows us to identify and group information assets based on generic common characteristics that exist independently of their time/space representation. Abstraction is adopted in knowledge modeling, as well as in object-oriented design, the Unified Modelling Language, and integrated development frameworks.
Structured methods rely on the notion of a 'single abstraction' mechanism [13], which consists of extracting a top-level view of different aspects of the system, forming the basis for 'functional decomposition', the technique that drills into top-level functions and breaks them down into smaller functions, while preserving the representation of other functional aspects of the system such as inputs, outputs, controls and other mechanisms. Diagrammatic methods such as UML, for example, are used as a form of ontological notation, although it is sometimes argued that such diagrammatic notations may not have the 'expressivity' required to represent all of the essential ontological formalisms, such as axioms. Patterns, also known as 'design patterns', are a modeling technique that has started to become adopted in OE [14]. Techniques such as decomposition, as we know it in functional and/or task decomposition, are also sometimes adopted in ontology development, as in the DOGMA [15] approach, which decomposes an ontology into an ontology base (a set of atomic predicates) and a commitment layer (rules). So various techniques for structuring and abstracting knowledge are already adopted. But the learning curve is steep, reality is infinite, and ontology modeling could go on forever. A 'just enough' approach is intended to inspire practitioners to adopt what is needed from wherever they can get it (even by mixing and matching different methodologies, for example) and to set aside the rest. Simplicity and minimalism are golden rules for elegance in any design discipline, although they should not be traded against reliability and robustness, as rushed oversimplification can also lead to undesirable weaknesses. What may well be 'just enough' for one project may not be enough at all for another.
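The DOGMA-style decomposition mentioned above can be sketched in miniature: an ontology base of atomic fact types, plus a separate commitment layer of rules constraining how the base may be used. All names below are hypothetical:

```python
# Hypothetical ontology base: a set of atomic fact types (lexons).
ontology_base = {
    ("Dish", "has_price", "Price"),
    ("Menu", "lists", "Dish"),
}

# Commitment layer: rules expressed as predicates over candidate facts.
# Example rule: only a Dish may carry a has_price relation.
commitments = [
    lambda fact: not (fact[1] == "has_price" and fact[0] != "Dish"),
]

def commits(fact):
    """A fact is admissible if it is in the base and every commitment rule holds."""
    return fact in ontology_base and all(rule(fact) for rule in commitments)
```

The design choice mirrored here is that the base stays neutral and reusable, while application-specific constraints live entirely in the commitment layer.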
3.1 Methodologies: An Overview
OE consists of methods for the design and implementation of ontologies in the context of IT; these are generally conceptual and semantic models devised to support various intelligent functions, including information retrieval in network-supported and web-based environments. Many aspects of OE methodologies are similar to those of software and systems development methodologies, and typically revolve around a 'life cycle'. Over recent decades countless such methodologies have emerged, often evolving organically out of each other. They can be compared by evaluating parameters they may have in common, for example with knowledge engineering methodologies: the detail of their specification (as opposed to being just an outline, as JEOE intends to be), whether they support a particular knowledge representation formalism (say frames rather than rules), and whether, and to what extent, they are application dependent. Some methodologies are built into ontology editing platforms; another historical factor of comparison has been their conformance to software engineering standards, such as IEEE 1074-1995 [16]. Commonly known methodologies include: IDEF5, TOVE, DODDLE, CLEPE/AFM, the Cyc method, Mike Uschold and Martin King's method, Michael Grüninger and Mark Fox's method, KACTUS, METHONTOLOGY, SENSUS, On-To-Knowledge (OTK), OntoClean, DILIGENT, HCOME, Ontology Development 101, CO4, KASquare, DOGMA (AKEM), SEKT, and BORO. Information systems become larger and more complex by the day, and ontologies have become necessary to support their integration and management. One of the tangible benefits they provide is the facilitation of knowledge reuse and communication, which improves the quality of documentation, contributes to reducing defect injection, and helps contain the management and maintenance costs of any ontology.
The proliferation of methodologies does not resolve the challenge of balancing theoretical competence, such as the in-depth specialist knowledge of academics, against the more pragmatic need to optimise efforts and resources, a priority for organizations, managers and, to some extent, engineers. Who makes the decisions in an OE project, what budget to allocate, and what processes and tools to adopt are critical factors for success or failure, and somewhat novel territory where managerial competences are limited. Additionally, when it comes to knowledge acquisition, usability and the management of ontologies, users still perceive many weak spots in methodologies, and consider them not yet sufficiently mature and not adequately meeting their requirements. The paper 'OE, A Reality Check' [17] makes the point that "OE research should strive for a unified, lightweight and component-based methodological framework, principally targeted at domain experts". JEOE constitutes a step in that direction.

4. JEOE ESSENTIALS STEP BY STEP
Traditional OE methodologies, with a few exceptions, tend to be based on a waterfall approach. In JEOE, the sequence of activities is not strictly prescribed, just recommended, but the emphasis is on iteration. Despite the multiplicity and variety of methods that have become available, ontology development is not a 'one size fits all' practice, although sound principles and good practices are generally universally applicable. Any project needs to be, to a certain extent, ad hoc, and sound enough to guarantee the best use of resources and the reliability and stability of the output. It must also be agile enough to adapt to rapidly evolving circumstances, requirements and digital environments. It is up to practitioners, thanks to a mixture of wisdom and know-how, to draw the line and decide how far is far enough. The remainder of this paper introduces the main JEOE steps, summarised in the list below:
[1]. Identify stakeholders; outline stakeholder profiles
[2]. Define the purpose of the ontology (emphasis on representation/indexing, problem solving/reasoning)
[3]. Outline requirements
[4]. Identify and survey existing knowledge sources and existing ontologies; elicit existing knowledge; assess why the existing knowledge resources do not meet the intended user requirements; update the requirements with the output of the activities above [iterative]
[5]. Scope the ontology (define the boundaries and level of granularity, according to the goal and stakeholder requirements); update the requirements [iterative]
[6]. Devise and implement a quality assurance plan; add quality parameters to the requirements [iterative]
[7]. Define the field of competence to identify the knowledge boundaries (competency questions); match the field of competence with the knowledge sources
[8]. Define the ontology artifacts: vocabulary; identify concepts/entities/classes, relations, axioms; refine and map vocabularies to artifacts
[9]. Transfer concepts to an ontology language representation: select knowledge representation formalisms and annotation depending on stakeholder requirements, scope and goal
[10]. Deploy / systems integration (modular, incremental)
[11]. Testing: evaluation, quality monitoring, competence assessment
[12]. Publishing
[13]. Maintenance / reuse

4.1 Identifying Stakeholders
Before coding, some level of analysis and design is always advisable. Depending on the domain, target functionality and desired degree of precision, this process, and the set of requirements that results from it, can be tightly or loosely specified and carried out, but it is never completely casual, and requirements should not be plucked from thin air (as sometimes happens). They must be elicited from stakeholders, using appropriate requirements analysis techniques.
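As a minimal sketch of step [1], a stakeholder register can be kept as plain data and clustered by shared parameters such as declared goals. All names, roles and goals below are hypothetical:

```python
from collections import defaultdict

# Hypothetical stakeholder register: each entry profiles a person or role
# and the goals they bring to the ontology project.
stakeholders = [
    {"name": "clinic staff", "role": "user", "goals": {"retrieval"}},
    {"name": "IT dept", "role": "technology provider", "goals": {"integration", "retrieval"}},
    {"name": "regulator", "role": "standardization body", "goals": {"validation"}},
]

def cluster_by_goal(register):
    """Group stakeholders under each goal they declare, exposing shared interests."""
    clusters = defaultdict(list)
    for s in register:
        for goal in s["goals"]:
            clusters[goal].append(s["name"])
    return dict(clusters)
```

Even this trivial clustering makes visible which goals are shared (and so are candidates for consensus) and which belong to a single stakeholder group.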
The broader category of 'stakeholders' is preferred in current systems design to the narrower category of 'users'. Stakeholders include users, but also sponsors, investors, technology providers, industry associations, standardization bodies, and other people and roles not necessarily identified at the planning stage. A stakeholder is anyone actively involved in the ontology development and its intended use, and anyone whose interests may be affected by the development of such an ontology. This is likely to end up being a large and very diverse crowd, which is what should make the process of OE fun, but which unfortunately can cause struggle and waste of resources. This is because some stakeholders come from traditional environments where their role and point of view are never challenged. But the narrower and more 'authoritative' the stakeholder base, the narrower the ontology's scope, and to some extent its ability to gain 'acceptance', and therefore its propensity for reuse. The 'stakeholder structure' or 'base' should be identified and profiled at an early stage in any ontology development project, and should be kept well involved throughout the subsequent phases, to avoid the 'disenfranchisement' that can result in lower levels of collaboration among different project contributors. A stakeholder analysis process consists of identifying stakeholders, such as persons and roles, and the organizations they belong to, and clustering them according to shared parameters (common goals, interests, requirements, tasks, responsibilities). 'Stakeholder management' should be carried out to leverage the patterns and dynamics of stakeholder groups, their goals, motives and commitments, and to create and sustain the collaborative momentum that can fuel an ontology development project itself. One of the criticisms that semantic web technologies have faced in recent years is that they were not really designed with users, or usefulness, in mind.
A lot of (publicly funded) time and money has been spent developing tools, platforms and environments that were experimental, and that satisfied a particular curiosity of a researcher or supported a theoretical point of view. Nowadays, especially in tightening economic conditions, there is increasing demand for justification, and for the adoption of good practices.

4.2 Purpose/Goal
A goal is intended as a tactical, precise, measurable target achievement, while purposes and overall scopes tend to be strategic. Ontologies can be used for a variety of goals and purposes, and once an ontology is in place, and developed according to good practice, it can be used and manipulated without restrictions, even for a purpose different from the one initially intended. Ontologies, however, are constantly undergoing refinement, and if they are not, they should be, at least to some extent. Having a clear goal for the usage and application of the ontology will help to guide its development, and to concentrate the effort on fulfilling the priority requirements. Ontologies are specifications of a consistent and explicit view of reality, but both the view and the reality they represent change, and this change must be tracked throughout development.
Examples of goals are scattered everywhere in the OE literature and include:
• Support a process execution within a system
• Improve the efficiency of reasoning
• Consolidate and harmonize existing data/information
• Provide an abstract, more schematized view
• Create a consensual, unified view that can serve as a synthesis of different views
• Provide a formal specification
• Support the integration of data, applications and systems, to help minimize design and planning errors caused by lack of domain knowledge
Clearly establishing the purpose helps to understand what kind of ontology needs to be developed: for example, a content ontology for reusing knowledge, a communication ontology for sharing knowledge, an indexing ontology for case retrieval, or a meta-ontology for increased knowledge representation. But a 'good' ontology is going to be usable for almost any purpose. Examples of functions that can be supported, or even fully automated, using an ontology [18]:
• Consistency checking (properties and value restrictions)
• Auto-completion of information partially provided by users
• Interoperability support (shared conceptualization)
• Support for validation and verification testing of data (and schemas)
• Configuration support: class terms may be defined so that they contain descriptions of what kinds of parts may be in a system
A simple example of a goal could be: "the ontology serves as a means to structure and verify the validity of any information set (from a restaurant menu to the diagnosis of a complex medical symptom)", or "the ontology serves as a set of parameters for integrity constraints within a given process". The ontology 'goal' ideally emerges from the agreement/consensus of the stakeholder basis, the members of which probably spend a lot of time arguing, among other things, about which goal should take priority over another, or how two goals may conflict. These are generally necessary labor pains for any OE project.
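As an illustration of the first function listed above, consistency checking against value restrictions can be sketched as follows (the class names, properties and ranges are hypothetical):

```python
# Hypothetical restrictions: class -> property -> (required type, allowed range).
# Declared once in the ontology, then used to validate candidate instances.
restrictions = {
    "Dish": {"price": (float, (0.0, 500.0))},
}

def check(instance_class, properties):
    """Return a list of restriction violations for a candidate instance."""
    violations = []
    for prop, (typ, (lo, hi)) in restrictions.get(instance_class, {}).items():
        value = properties.get(prop)
        if not isinstance(value, typ):
            violations.append(f"{prop}: expected {typ.__name__}")
        elif not lo <= value <= hi:
            violations.append(f"{prop}: {value} outside [{lo}, {hi}]")
    return violations
```

For instance, `check("Dish", {"price": 12.5})` yields no violations, while a negative price is flagged; the same declared restrictions can drive auto-completion and validation testing as well.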
What sometimes happens is that when different stakeholders cannot agree on the priority of a goal for an ontology, this naturally serves as the ontology's 'split point', where subsets of stakeholders dedicate themselves to developing one particular aspect of the ontology, according to their priority and preferred goal. (This is also true for any other decision where consensus cannot be achieved.) The entire stakeholder base should then agree only on common parameters, for the purpose of facilitating the merging, interoperability and integration of the respective outcomes at a later stage. Reconciling different stakeholder views can be managed using standard brainstorming and knowledge sharing techniques, supported by mind mapping tools, or by any of the tools designed for this purpose, many of which are free or open source. Framing such goals as a set of specifications can be useful, but should be done with a degree of flexibility. Goals should also be periodically revised, as the project requirements and context may change.

4.3 Requirements
An ontology may well serve more than one purpose/goal, but it will always help to structure, analyse, communicate, share and reuse knowledge about a particular domain, task or process. An ontology is not, strictly speaking, 'software'; however, it is generally represented, used, queried and manipulated using software artifacts. Much of the requirements engineering practice applied to software development can be adopted in OE. For example, it is possible to distinguish to some extent between system/functional requirements, when they solve a particular problem, provide a functionality or enforce a constraint, and user requirements, when they are designed around user tasks and needs, although the two often overlap.
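The functional/user distinction above can be kept explicit in a lightweight requirements record, traceable back to the stakeholder who raised it. A minimal sketch, with hypothetical fields and example entries:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    rid: str
    kind: str            # "functional" or "user"
    statement: str
    stakeholder: str     # who elicited / owns the requirement
    fixed: bool = False  # fixed requirements cannot change; others may evolve

# Hypothetical entries for an illustrative menu/diet ontology project.
reqs = [
    Requirement("R1", "functional",
                "support menu-item retrieval by dietary restriction", "IT dept"),
    Requirement("R2", "user",
                "vocabulary must be readable by non-specialists", "clinic staff",
                fixed=True),
]

def by_kind(requirements, kind):
    """List requirement ids of the given kind."""
    return [r.rid for r in requirements if r.kind == kind]
```

The `fixed` flag anticipates the sequential/iterative mix discussed below: fixed requirements are settled up front, the rest are revisited at each iteration point.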
Some ontologies may originate from the need of a stakeholder, or a group of stakeholders, to define and specify a given notion, concept, domain, problem or field of action that they are working on. It can be argued that not every ontology needs a set of requirements, and that sometimes ontologies just 'happen' as the result of pulling together the cognitive artifacts of a given task, profession or team. But scientific and technical domains are complex, information is becoming ever more challenging to manage, and the validity of information technology artifacts depends on their accuracy; when organizations develop ontologies, they do so by allocating resources, be it employee time, skills or equipment, specifically to address given problems, and the returns on investment are carefully weighed against results. Did the ontology solve the problem it was developed for? To what extent? At what cost? Whether an ontology meets its requirements will be an important parameter for measuring the success or failure of a project, and for gauging quality evaluation. Quality targets should be included in the ontology requirements. It is important that the ontology complies with the expected quality parameters, just as it is important that it answers the 'competency questions'.

4.3.1 Requirements Input
The requirements for an ontology should be developed by taking into account first and foremost the stakeholder input, possibly compiled following some structure in relation to the goal and purpose of the ontology, as well as to the other stakeholders' inputs. Additional ontology requirements should be derived from an analysis of scope, granularity, quality standards, and implementation languages and environments, discussed next, which must be decided throughout development, often in parallel with other activities, and integrated at each iteration point.
Some requirements are likely to be 'fixed', that is, they cannot change, while others will emerge and evolve during development, in which case both 'sequential' and 'iterative' approaches to compiling the requirements for the ontology can be combined. Among the desirable top-level requirements already discussed elsewhere, some generics to keep in mind are that the ontology should:
• declare explicitly what high-level knowledge (upper-level ontology) it references;
• declare explicitly what kind of reasoning/inference it supports and is based on;
• be accessible to all the agents/agencies (this means shared, viewable, understandable);
• be 'acceptable' to all the agents/agencies from their different perspectives, in terms of point of view, culture, language, and conformance to policies and protocols;
• be 'usable' in terms of compatibility with the local information systems used by agents/agencies.

4.4 Knowledge Sources
Many ontologies are published on the internet, although not all of them are publicly available and accessible, and sometimes they are protected by intellectual property rights. Whatever ontology is needed, chances are that one already exists, or is being developed, by someone else; however, other existing ontologies may not necessarily comply with the set of requirements of the given project at hand. Knowledge drives decision making and behaviours (it can be seen as a form of organizational energy, the transfer and exchange of which leads to transformation) [27]; however, not all knowledge is 'factual', and only when facts are supported by, and ideally linked to, evidence (provenance) can they be disambiguated from beliefs and opinions [25] (Figure 1).

Figure 1. Fact, Opinion or Belief? (Di Maio, 2010)

One of the good principles of knowledge reuse prescribes sourcing and recycling what already exists, to the extent that this is possible.
The obvious 'cost of reuse' rule also applies: if the cost of reuse is higher (in terms of acquiring a license, for example, or of decoding an ontology that has been heavily committed to a formalization), then the choice not to reuse is fully justified. Although it may not always be possible or convenient to reuse existing ontologies, it is good practice to acknowledge and reference them for completeness. The knowledge that constitutes the foundation of an ontology is always grounded in, elaborated on, and derived from knowledge that existed before, which in some cases is remixed and reinterpreted to suit a novel requirement. Other forms of structured knowledge repositories that are not ontologies include encyclopedias, libraries, indices, dictionaries, and archives; scientific and technical publications tend to reference and include references to the body of knowledge (BOK) of any given domain. A lot of knowledge is also scattered in various non-structured forms. An ontology aims to map, synthesize, and resolve the conflicts that exist within the knowledge sources that constitute the body of knowledge of any given domain. When developing an ontology, all of the above knowledge sources should be considered, including other existing ontologies. When the analysis and summary of existing knowledge sources still does not satisfy the requirement, still does not provide the answer to the questions being sought, or does not support the desired functionality (for example, being coded into a particular language), then the scope of the work to be done becomes clearer and better documented. The result of this evaluation will reinforce or modify the initial requirements and specification, taking into account what can be reused.
Given the messy state of affairs of information and knowledge sources today (much of which is outdated, poorly accessible, etc.), writing things up from scratch can sometimes be quicker, more efficient, and more likely to conform to the needs at hand. But existing knowledge should always be audited and inventoried in an OE project. How much one should try to reuse what was there before, and how much one should reinvent anew, is often a 'just enough' type of decision.

4.4.1 Knowledge Sharing and Reuse

At least two research directions have motivated (and funded) OE research developments of recent years. One is the need to provide knowledge representation mechanisms that are 'shared', that is, commonly accessible and understood, so that knowledge can be reused and knowledge flows optimised. Another is to provide mechanisms for artificial intelligent agents to perform reasoning functions. The latter conflicts with the former when intellectual property rights such as patents are also an intended, albeit secondary, purpose of the research. It is worth remembering that ontologies in general are devised to facilitate the sharing of knowledge, whether among restricted or open agents, and whether these are human or artificial. Generally, for knowledge to be reused, it needs to be shared. Ontologies provide sets of parameters for knowledge sharing, and rely on the assumption that all the constructs and artefacts are shared. Across large domains, taking into account the diversity of disciplines, paradigms, axioms, vocabularies, uses, standards, and practices, and despite many years of OE research and practice, knowledge sharing good practices (such as shared vocabularies) are still not adopted, or only marginally so, outside the relatively small knowledge engineering community.
A knowledge audit can provide an overview of the (explicit) knowledge and its qualitative and quantitative characteristics, helping to identify the location where it resides, as well as other information such as the people and roles involved in its creation and maintenance, and other organisational processes associated with it. In related research, a Knowledge Audit Framework [19] has been devised for the purpose of facilitating systematic auditing, as well as sharing and reuse, of artefacts and schemas.

4.5 SCOPING THE ONTOLOGY

Reality is complex, resulting from the dynamic combination of what exists, which is not always observable, and its underlying dynamics, which are only partially 'knowable'. Causes are often imponderable. Ontologists describe these layers as 'levels of reality'. A simple yet effective example is provided by the decomposition of the levels of reality of a simple nut and bolt [20], whereby a compound object is made up of components, which in turn are made up of elements, which in turn are made up of particles. At what level of reality is the target ontology going to be pitched? That needs to be specified and defined very early, and the outcome of this decision is also going to constitute part of the specification document. OE addresses the different levels of reality by specifying ontologies that define reality at the appropriate level of granularity, depending on what they are describing. This distinction is reflected in the differentiation between UPPER, DOMAIN, APPLICATION, TASK, or PROCESS ontologies.
Segmenting ontologies according to their scope tremendously simplifies the effort of referencing existing knowledge, as different axioms and paradigms may rule the different portions of reality being addressed, and of specifying their intersections (for example, where physics, the science that studies elements, interacts with ergonomics, the science that studies how people work), making the ontology that is being built 'grounded', and therefore more stable, reliable, and reusable in the future. The ontology specification document should include the degree of formality, which is determined either by the existence and weight of axioms, or by the degree of formality of the ontology representation language (OWL is said to best support axiomatic formalism, etc.). But one should remember that even the most informal ontological statements rely, implicitly or explicitly, on the existence of at least one axiom.

4.6 QUALITY ASSURANCE

One of the established methods to evaluate the quality of artifacts is to develop a 'quality model', which should be done during the early stages of ontology development and serve as guidance throughout the project. Quality models are developed upfront and used as target parameters throughout development, evaluation, and testing. To reiterate: while testing is generally done at the end of development (or of each iteration), quality evaluation can only be performed if quality parameters are set upfront; quality models contain patterns of qualitative and quantitative measurements of various aspects. The quality of an ontology is sometimes measured across two dimensions, its accuracy and its comprehensiveness [22], corresponding to the notions of precision and recall in search technology. Almost the entire range of standard testing techniques used in programming (consistency, integrity, validation, redundancy) can be applied to test the validity of an ontology.
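The accuracy/comprehensiveness measurement mentioned above can be sketched as a precision/recall computation over term sets. The term lists here are purely illustrative, and a gold-standard reference list is assumed to exist:

```python
# Accuracy ~ precision: how many of the ontology's terms are correct.
# Comprehensiveness ~ recall: how much of the reference domain is covered.
def precision_recall(ontology_terms, reference_terms):
    ontology_terms, reference_terms = set(ontology_terms), set(reference_terms)
    correct = ontology_terms & reference_terms
    precision = len(correct) / len(ontology_terms) if ontology_terms else 0.0
    recall = len(correct) / len(reference_terms) if reference_terms else 0.0
    return precision, recall

p, r = precision_recall({"bolt", "nut", "washer", "gizmo"},
                        {"bolt", "nut", "washer", "thread"})
# 3 of 4 ontology terms appear in the reference (precision 0.75);
# 3 of 4 reference terms are covered (recall 0.75).
```

Such figures are exactly the kind of quantitative measurement a quality model can record as an upfront target and re-check at each iteration.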
A good summary of quality evaluation criteria for ontologies can be found in [21].

4.7 COMPETENCE

In addition to using known software, project quality, and evaluation techniques, ontologies rely on special, specific tests: competency checks (also known as 'competence tests'). The competence domain of an ontology indicates the knowledge field that the ontology represents (or should represent). In order to answer any given competency question, the ontology should contain all the knowledge parameters necessary to formulate a correct answer to that question. Competency checks are sets of questions used to determine the competency of an ontology. It is useful to develop these competency questions from stakeholder input and throughout the project lifecycle. Quality planning is done up front, but the quality model should be updated dynamically and iteratively throughout the project. Different tests can be set up to verify the validity and quality of each part of the ontology and of each process within the ontology development, and carried out correspondingly at each step.

4.8 DEVELOPING THE ARTIFACTS

An ontology is defined by the boundaries that constitute it. This boundary setting starts early in development, as stakeholders and goals are in themselves the first set of boundaries. However, the real definition of an ontology tightens up when getting down to devising its artifacts. Much has been said about what constitutes an ontology: conceptualisations, models, schemas, representations, frameworks. An ontology may take a variety of forms, but it will necessarily include a vocabulary of terms and some specification of their meaning, such as definitions and an indication of how concepts are interrelated, which collectively impose a structure on the domain and constrain the possible interpretations of terms [Uschold et al.].
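The competency-check idea above can be sketched in a few lines: an ontology (here a toy dictionary of facts) is competent for a question only if it holds every knowledge parameter the question relies on. Both the ontology content and the question-to-terms mapping are illustrative assumptions:

```python
# Toy ontology: each term maps to its declared properties.
ontology = {
    "water_pump": {"is_a": "equipment", "deployed_by": "fire_brigade"},
    "fire_brigade": {"is_a": "agency"},
}

def can_answer(required_terms, onto):
    """A competency question is answerable only if every term it
    relies on is present in the ontology."""
    return all(term in onto for term in required_terms)

# "Which agencies can deploy water pumps?" needs both terms below.
assert can_answer(["water_pump", "fire_brigade"], ontology)
# A question about the coast guard reveals a competence gap.
assert not can_answer(["water_pump", "coast_guard"], ontology)
```

Running such checks at each iteration, against questions gathered from stakeholders, is one lightweight way to keep the competence domain and the quality model in sync.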
VOCABULARIES: Encyclopedias, dictionaries, thesauri, and vocabularies are fundamentally lists of words and their definitions, which can include grammatical, phonetic, and etymological annotations. Thesauri are vocabularies in which the semantic associations between terms are mapped, while glossaries are alphabetized lists of terms with definitions, usually appended at the end of documents or reports. Information systems adopt vocabularies to support design and documentation; 'data dictionaries', for example, are used to list the entries used in a database. Vocabularies are at the core of ontologies, to the point that sometimes they are referred to as being the ontology itself. They list the terms that declare and represent every concept, relation, function, and axiom; the more formal an ontology is, the stricter the definition of its vocabulary terms. In an ontology, the vocabulary has more than one function: it serves as an index and as a directory of content. Generic vocabularies can contain more than one definition for each term, but controlled vocabularies do not: they allow only one definition per term and explicitly enumerated (numbered) terms, which must be unambiguous and non-redundant. Vocabulary creation is both an art and a science, which leverages principles of information and library science; the core notion, however, is to keep track of the words (lexons) used in the ontology itself, as well as in the discussions that lead to the development of an ontology. How to combine different kinds of vocabularies to make the most of organizational knowledge is currently being researched.
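The controlled-vocabulary constraints just described (one definition per term, explicit enumeration) can be sketched as a small class. The class and its entries are illustrative, not part of any vocabulary standard:

```python
# Minimal controlled vocabulary: terms are explicitly enumerated and
# each term is allowed exactly one definition.
class ControlledVocabulary:
    def __init__(self):
        self._entries = {}  # term -> (entry number, definition)

    def define(self, term, definition):
        if term in self._entries:
            raise ValueError(f"'{term}' already defined: one definition per term")
        self._entries[term] = (len(self._entries) + 1, definition)

    def lookup(self, term):
        """Return the (number, definition) pair for a term."""
        return self._entries[term]

cv = ControlledVocabulary()
cv.define("bolt", "threaded fastener with an external male thread")
cv.define("nut", "fastener with a matching threaded hole")
```

Attempting a second `cv.define("bolt", ...)` raises an error, which is precisely the non-redundancy guarantee that distinguishes a controlled vocabulary from a generic one.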
In addition to vocabularies, which are used to 'name' the artifacts, further components are necessary for an ontology to take shape: concepts, relations, and axioms, discussed briefly below.

CONCEPTS: Concepts are fundamental to our ability to think, express, represent, and communicate; however, defining unambiguously and with certainty what constitutes a concept is rather tricky, and pushes IT practitioners toward the realm of philosophy, where they can easily get lost and never come back to IT proper. But that is a challenge of OE. Concepts can correspond to things, but also to 'fuzzy clouds' of ideas and notions identified by words and related to a certain thing or subject. And even when referring to tangible things, concepts can be abstract and difficult to capture. The nearest techniques comparable to conceptual modeling are entity modeling and class modelling. Concepts can be broadly regarded as cognitive artifacts that support categorization and communication, and are necessary to support human and artificial thinking and reasoning. The purpose of ontologies is to make them explicit and represent them so that they serve a variety of purposes, namely the intended goals. Conceptual categories and thoughts are closely related to language. A concept model can be used to complement and extend a functional data model. In OE, concepts can be modeled following 'formal concept analysis' (FCA).

RELATIONS: Most views of reality are perceived as dynamic combinations of things and entities, kept together by correlations and dependencies. In models of reality, such as ontologies, the semantic interdependence between one thing and another is considered a relation. Relations are the cognitive counterparts of dependencies in the real world. There are different kinds of relations, and there is no single theory that studies them all.
A fundamental representation of a relationship between two concepts is a mathematical structure denoting it as a set mapping between the instances belonging to the two concepts. These mappings might be characterized along the dimensions outlined below.
• Arity: Typically binary relationships are of most interest, but relationships can be of arbitrary arity; i.e., we could have 3 or more concepts participating in a relationship.
• Cardinality: These constraints are characterized in one of the following ways: 1-1, many-1, 1-many, or many-many. A more generalized way of representing these cardinality constraints is using a pair of numbers that specify the minimum and maximum number of times an instance of a concept can participate in a relationship. This is a very useful technique for n-ary relationships and also captures partial participation of concepts in relationships. 1-1 and many-1 relationships are functions, which can be exploited in various ways.
• Direct vs. Transitive Relationships: Some entities might be directly related to each other via their participation in a common relationship, or might be related transitively to each other via a chain of relationships.
• Crisp vs. Fuzzy: Most of the current modeling approaches view relationships as crisp; i.e., for an n-ary relationship, instances of n concepts are either part of a relationship or not (e.g., is-a, part-of relations). In the case of fuzzy knowledge [Zadeh65], the extension of a relationship may be viewed as a joint probability distribution on the concepts participating in the relationship. Semantic similarity (i.e., proximity) between two entities is an example of a fuzzy relation.
• Properties vs. Relations: Properties are special relationships where the ranges of a relationship are values of a data type (e.g., dates, age) as opposed to instances of a concept.
• Structural Composition: Relationships can either be composed (if they are functional in nature) or combined using join operations to create new relationships and associations based on existing relationships.

Computational techniques can be used to identify, discover, validate, and evaluate relationships within any given knowledge and reality schema. 'Relation' is a canonical class of any ontology. It is characterized by substantial properties and formal attributes: among the material properties are reality, nature, and the type and direction of dependency; among the formal attributes are transitivity, symmetry, reflexivity, and arity (or cardinality: the terms, or tuples, of domains, elements, components, or arguments) [25]. Three things are of importance:
1. the components of relations are of the same kinds and sorts: objects, persons, qualities, quantities, times;
2. the ordering of relations and their direction, e.g. a triadic 'giving', a tetradic 'paying', or a triadic 'betweenness';
3. the key sense of a relationship is represented by its graph, indicating its nature and kind: whether it is a causal, temporal, spatial, semantic, or logical relation, etc.
When represented in the context of linguistic representation, relations are called 'lexical relations', which in turn can be of many kinds. Generally known are taxonomic relations (synonymy, homonymy) and non-taxonomic relations [22]. A sample set of fundamental ontological relations can be viewed in the Relation Ontology from the OBO Foundry project [23].

AXIOMS: In an ontology, axioms serve to model sentences that are 'always true', and they are used to verify the consistency of the ontology, as well as the consistency of the knowledge stored in a knowledge base.
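Two of the relation dimensions listed above, cardinality and transitivity, can be sketched as simple checks over a binary relation represented as a set of instance pairs. The nut-and-bolt data echoes the levels-of-reality example and is purely illustrative:

```python
# A binary relation as a set of (source, target) instance pairs.
def is_many_to_one(pairs):
    """Many-1 cardinality: each source maps to at most one target,
    i.e. the relation is a function."""
    seen = {}
    for a, b in pairs:
        if seen.setdefault(a, b) != b:
            return False
    return True

def transitive_closure(pairs):
    """All pairs reachable through chains of the relation."""
    closure = set(pairs)
    while True:
        extra = {(a, d) for a, b in closure for c, d in closure if b == c}
        if extra <= closure:
            return closure
        closure |= extra

part_of = {("particle", "element"), ("element", "component"),
           ("component", "compound")}
# part-of is transitive: a particle is indirectly part of the compound.
```

Checks of this kind are examples of the computational techniques mentioned above for validating relationships within a knowledge schema.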
Axioms are required in support of any logical statement, and the main distinction to remember is that in traditional logic they are considered self-evident and true, while in mathematics logical axioms are usually statements that are taken to be universally true. Outside logic and mathematics, the term 'axiom' is used loosely for any established principle of some field. Axioms translate into constraints, which in turn can be considered the logical boundaries of an ontology. They can be transformed and mapped directly into rules: if an axiom maps to a rule, then the constituent parts of the axiom map to the constituent parts of the rule [24]. The mapping follows:
• ontology axiom → rule
• axiom statement → rule clause
• statement concept → entity in a rule clause
• statement relationship → relationship in a rule clause

4.9 IMPLEMENTATION

The implementation stage starts with transferring what has been designed on paper using graphical notation (say, bubbles and arrows) to an ontology language representation. It consists of selecting knowledge representation formalisms and annotations depending on the defined stakeholder requirements, scope, and goal. According to computer scientist Tom Gruber: "When we choose how to represent something in an ontology, we are making design decisions. To guide and evaluate our designs, we need objective criteria that are founded on the purpose of the resulting artifact, rather than based on a priori notions of naturalness or truth." The main principles of OE, as devised around 30 years ago by Gruber [5], still largely stand. They are summarised below:
1. Clarity. [The ontology] should effectively communicate the intended meaning of defined terms. All definitions should be documented with natural language. [...]
2. Coherence. [It] should be coherent: that is, it should sanction inferences that are consistent with the definitions. [...]
3. Extendability.
[It] should be designed to anticipate the uses of the shared vocabulary. [...]
4. Minimal encoding bias. The conceptualization should be specified at the knowledge level without depending on a particular symbol-level encoding. An encoding bias results when representation choices are made purely for the convenience of notation or implementation. Encoding bias should be minimized, because knowledge-sharing agents may be implemented in different representation systems and styles of representation.
5. Minimal ontological commitment. [It] should require the minimal ontological commitment sufficient to support the intended knowledge-sharing activities. [...]

Principle 4, minimal encoding bias, states that an ontology should be independent of its implementation, whereby the coding should not run the development process. This is something that systems designers and information architects know very well. Concepts can be modeled using mind maps, lattices, Petri nets, and other abstract, diagrammatic, and graphical representations. After the conceptual modeling is done, the goals are set, the requirements specified, and the artifacts outlined, it is time to start thinking about actually encoding. Encoding an ontology means starting to assign roles, properties, and values to each term pointing to an artifact. That is when the appropriate ontology representation language is selected, unless a specific code set/implementation option is part of the initial requirements, in which case some encoding decisions may well precede other aspects of the development and inevitably influence them. The knowledge contained in ontologies can be expressed with different knowledge representation formalisms and languages capable of supporting logical assertions, from frames to semantic networks and axioms, where the most common formal notation in use is description logic.
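Minimal encoding bias can be illustrated with a small sketch: the conceptual model is kept as plain data, and a specific notation (here, RDF's Turtle syntax with the standard `rdfs:subClassOf` property) is only one possible serialization of it. The `ex:` namespace and the nut-and-bolt classes are made-up examples:

```python
# Conceptual model kept at the knowledge level, independent of notation.
model = {
    "Bolt": {"subClassOf": "Fastener"},
    "Nut":  {"subClassOf": "Fastener"},
}

def to_turtle(model, prefix="ex", base="http://example.org/"):
    """One of several possible encodings of the same conceptual model."""
    lines = [f"@prefix {prefix}: <{base}> .",
             "@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> ."]
    for cls, props in model.items():
        for prop, value in props.items():
            if prop == "subClassOf":
                lines.append(f"{prefix}:{cls} rdfs:subClassOf {prefix}:{value} .")
    return "\n".join(lines)

print(to_turtle(model))
```

A second serializer (say, emitting JSON or OWL/XML) could be written against the same `model` without touching it, which is exactly the implementation independence the principle asks for.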
On the Web, the current standard "grammar" for ontology representation is the Resource Description Framework (RDF) together with OWL. Having the ontology exist in a variety of notations and formats, not only in RDF and OWL, means that it can effectively be worked on, manipulated, and viewed in different environments, and not just by ontology editors and Semantic Web browsers of one kind. However, to test and assess the reasoning that an ontology is capable of supporting, an implementation or working prototype is necessary. An ontology can satisfy competence tests based on paper models; however, it is only when implemented that a system will be able to check whether it works and whether it helps reasoning or totally warps it. Documenting the ontology development process as well as the implementation/encoding process is important, especially because during testing and evaluation it will be necessary to identify whether errors are actual conceptual flaws in the model or are caused by flaws and errors in the implementation and encoding of a correct conceptual model.

4.10 Deployment

Having an ontology all done, implemented, and working is probably never going to happen as a single event, because it is likely to remain a continuous, staged process. Reaching the end of the development process is only a beginning and the completion of an iteration. Using the ontology to serve its purpose means integrating it with the rest of the information systems environment it needs to work with, and that can never be defined entirely. It is likely to evolve together with the configuration of the system and infrastructure.
An ontology or an ontology module can be released as a compact, standalone, all-in-one artifact that can be zipped up and downloaded as a single file; however, defining what part of an information system or infrastructure the ontology should interface with and relate to, as well as what level of integration it should support, is another aspect of ontology development that should be planned early and possibly form part of the requirements.

4.11 Testing and Validation

In OE, testing should be considered a subset of the overall quality assurance activity plan. The validity of different parts of an ontology must be differentiated, as not every aspect of the conceptual and semantic model is directly related to the quality and validity of the ontology code and implementation. The robustness of the conceptual model and the validity of the semantic artifacts should be verified separately from the code, although integrated tests should also be carried out over both. Almost the entire range of standard testing techniques used in programming, such as consistency, integrity, validation, and redundancy testing, as well as usability testing, can be adapted to test an ontology. The key is to isolate the components/aspects that need to be verified and to set up the corresponding set of tests for each artifact or set of artifacts (sometimes referred to as "units"), as well as carrying out overall performance checks. An ontology should be tested for accuracy, the ability to support correct inference, in relation to the expected competence range. It should also be tested to verify its ability to identify and correctly represent linguistic intensionality and extensionality, where the first refers to the "aboutness" of an expression, and the latter refers to the ability to capture and represent the context for the intended meaning.
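One unit-style consistency test of the kind described above can be sketched as a check that a subsumption (is-a) hierarchy contains no cycles, a basic integrity property of the conceptual model. The toy hierarchies and the depth-first approach are illustrative:

```python
# Consistency check: an is-a hierarchy must be acyclic.
def has_cycle(is_a):
    """Detect a cycle in a child -> [parents] mapping via depth-first walk."""
    visiting, done = set(), set()

    def walk(node):
        if node in visiting:        # revisited on the current path: cycle
            return True
        if node in done:
            return False
        visiting.add(node)
        for parent in is_a.get(node, ()):
            if walk(parent):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(walk(n) for n in is_a)

assert not has_cycle({"bolt": ["fastener"], "fastener": ["artifact"]})
assert has_cycle({"a": ["b"], "b": ["a"]})   # inconsistent hierarchy
```

Because this test runs over the model alone, it helps separate conceptual flaws from encoding errors, which is exactly why the section above recommends verifying the conceptual model independently of the code.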
4.12 Publishing

An ontology can represent the most abstract form of knowledge representation of a task or domain. It contains, in distilled form, a large portion of the intelligence of whatever is defined. By definition, publishing means making it accessible to others, where it can be found, referenced, and used. The first issue is IP control and what level of public access should be assigned to the ontology. This is not a technical issue, and the level of disclosure of the ontology should conform to the general IP policy of the organization that is developing it. Provided the ontology is for public disclosure and an appropriate license is attached to it, publishing generally entails making it available on the Web or in another universally accessible repository and making it discoverable. There are certain ontology schema repositories available on the Web, such as SchemaWeb, but in a linked data model it does not matter where the ontology files are published, because a semantic connection between a file and other corresponding objects on the Web will enable and facilitate its discovery.

4.13 Maintenance/Reuse

In our world of rapid changes, the lifecycle of an ontology is directly related to its ability to maintain its currency. There is only one way this can realistically be done, and that is to support dynamic updates. This may well become possible in the future, but at the moment it is just a good plan. Ontologies still need maintainers and curators to make sure that they do not become obsolete too quickly. Certain facts are not likely to change very often; however, other things change regularly and need updating: think of policy, legislation, and security. An ontology will need to be maintained and updated periodically, and this must be set up as an ongoing task that will extend well beyond development.
In JEOE we bunch ontology alignment, merging, and reuse under "maintenance", because they all represent an early step in the lifecycle of the next ontology, thus completing a loop.

5. OTHER JEOE RESOURCES

Aside from offering an agile process to guide development in a way that maximises stakeholder involvement and reduces the risks of failure, by identifying common stumbling blocks and suggesting workarounds and management decisions to avoid wasting resources and getting stuck in dead ends, JEOE aims to point to essential knowledge resources that constitute a minimal core body of knowledge for the practice, offering a minimal and hopefully lucid guide. Even before being consolidated as a set of core steps, JEOE started off as an approach, a systems engineering survival guide to ontology development, reflecting an engineer's attitude to life and the universe: a determination to get things to work, one way or another, characterised by extreme flexibility and an ability to make the right decision at the right time. Engineers, and 'systems engineers' in particular, view everything as a 'system' and tend to tackle every problem or challenge 'systemically'. It is hard to prescribe what that is, as a great deal of heuristics goes into a systemic approach, generally guided by experience. The first version of JEOE was published by Cutter Consortium, whose founders are also among the early pioneers of 'just enough' principles. A current open online version of JEOE exists as a website designed for future reference and open collaborative development: http://www.justenoughontology.co.uk/

6. ACKNOWLEDGMENTS

Thanks to those who generously share expertise and resources on the open web, especially Ed Yourdon and Tom DeMarco, for inspiring and guiding 'just enough' thinking towards JEOE, and Cutter Consortium for publishing the first JEOE version.
Research related to the Knowledge Reuse and Knowledge Auditing Framework was partly funded by EPSRC Grant Number EP/D505461/1, for which the author thanks the University of Strathclyde.

REFERENCES
[1] Childs, David L. Feasibility of a Set-Theoretic Data Structure: A General Structure. 1968.
[2] Codd, E. F. "A Relational Model of Data for Large Shared Data Banks." Communications of the ACM, 1970.
[3] Date, C., and Hopewell, P. "Storage Structures and Physical Data Independence." 1971.
[4] OMG Model Driven Architecture, http://www.omg.org/mda/
[5] Gruber, T. "Toward Principles for the Design of Ontologies Used for Knowledge Sharing." 1993. (Page 4, principle 4, minimal encoding bias.)
[6] International Council on Systems Engineering (INCOSE), www.incose.org
[7] RUP, Rational Unified Process, IBM. http://www.dot.nd.gov/divisions/maintenance/docs/OverviewOfSEA.pdf
[8] DILIGENT, http://www.uni-koblenz.de/~staab/Research/Publications/2009/handbookEdition2/diligent-handbook.pdf
[9] Creswell, J. W. "A Framework for Design." In Research Design: Qualitative, Quantitative and Mixed Methods. Sage Publications, Thousand Oaks, CA, 2003.
[10] DeMarco, T. "Structured Analysis." In M. Broy and E. Denert (Eds.), Software Pioneers. Springer-Verlag, Berlin Heidelberg.
[11] http://www-sst.informatik.tu-cottbus.de/~db/doc/People/Broy/Software-Pioneers/DeMarco_new.pdf
[12] Yourdon, E. JESA: Just Enough Structured Analysis.
[13] Barsalou, L. W. "Abstraction as a Dynamic Construal in Perceptual Symbol Systems." In L. Gershkoff-Stowe and D. Rakison (Eds.), Building Object Categories in Developmental Time. Hillsdale, NJ: Erlbaum, 2005.
[14] Blomqvist, Eva, and Kurt Sandkuhl. "Patterns in Ontology Engineering: Classification of Ontology Patterns." Proceedings of the 7th International Conference on Enterprise Information Systems (ICEIS), Miami, Florida.
[15] Meersman, Robert, and Mustafa Jarrar.
"An Architecture for Practical Ontology Engineering and Deployment: The DOGMA Approach." VUB STARLab, Vrije Universiteit Brussel, Belgium, 2002-2003.
[16] Fernández-López, Mariano, and Asunción Gómez-Pérez. "Overview and Analysis of Methodologies for Building Ontologies." Knowledge Engineering Review, Vol. 17, No. 2, June 2002, pp. 129-156.
[17] Paslaru Bontas, Elena, and Christoph Tempich. "Ontology Engineering: A Reality Check." Proceedings of the 5th International Conference on Ontologies, Databases, and Applications of Semantics. Lecture Notes in Computer Science (LNCS), Vol. 4275, Springer, 2006.
[18] McGuinness, Deborah L. "Ontologies Come of Age." In Spinning the Semantic Web: Bringing the World Wide Web to Its Full Potential, MIT Press, 2003.
[19] Knowledge Audit Framework, ISTCS.org, http://tinyurl.com/KAFRAM
[20] West, Matthew. "Levels of Reality in ISO 15926 and Shell's Downstream Data Model." Presented at the Levels of Reality conference, Bolzano, 2007.
[21] Stvilia, Besiki. "A Model for Ontology Quality Evaluation." First Monday, Vol. 12, No. 12, 3 December 2007.
[22] Vazifedoost, Oroumchian, and Rahgozar. "Finding Similarity Relations in Presence of Taxonomic Relations in Ontology Learning Systems." 2007.
[23] OBO Foundry Relation Ontology, http://obofoundry.org/ro
[24] Vasilecas, Olegas, et al. "Applying the Meta-Model Based Approach to the Transformation of Ontology Axioms into Rule Model." Information Technology and Control, Vol. 36, No. 1A, 2007. http://itc.ktu.lt/itc361/Bugaite361.pdf
[25] Di Maio, P. "Provenance for Fact Checking and Decision Support." Provenance and Linked Data Workshop, e-Science Institute, University of Edinburgh, March 2011.
