One way to approach the requirements of changing an implemented enterprise resource planning system would be via so-called model-based architectures. These are application development frameworks that allow software applications to be described in terms of what they must do (the business view of the software) rather than how they must do it (the technical view of the software). Such an approach puts the emphasis on designing the business processes and the business rules up front, to make sure that business functionality is complete and correct before coding begins. For a discussion of the challenges faced in changing an enterprise system post-implementation, see The Post-implementation Agility of Enterprise Systems: An Analysis.
Part Two of the two-part series The Post-implementation Agility of Enterprise Systems: An Analysis.
The modelling approach allows visualization of the intended solution so that business analysts, users, and developers can ensure that business needs are met before implementation in software code renders changes difficult and expensive to make. Using this model, the framework automatically generates the executable application, instead of a programming team manually converting the specifications into software. This code generation increases development efficiency and typically allows generation onto a range of platform choices. For more information, see What's Wrong with Application Software? A Possible Solution—What Is It, Why And How Does It Fit Into Your Future?.
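The code-generation idea described above can be sketched in miniature: business rules are declared as data (the "what"), and an executable validator is generated from them (the "how"), so that no programmer hand-translates the specification. All rule names, fields, and thresholds below are hypothetical illustrations, not any vendor's actual framework.

```python
# Hypothetical sketch of model-based code generation: the declarative
# model drives generation of executable validation code.
RULES = [
    # (field, operator, threshold, error message)
    ("order_total", "<=", 50000, "order exceeds approval limit"),
    ("quantity", ">", 0, "quantity must be positive"),
]

def generate_validator(rules):
    """Generate a validation function directly from the declarative model."""
    lines = ["def validate(record):", "    errors = []"]
    for field, op, threshold, message in rules:
        lines.append(f"    if not (record[{field!r}] {op} {threshold}):")
        lines.append(f"        errors.append({message!r})")
    lines.append("    return errors")
    namespace = {}
    exec("\n".join(lines), namespace)  # compile the generated source
    return namespace["validate"]

validate = generate_validator(RULES)
print(validate({"order_total": 60000, "quantity": 5}))
# -> ['order exceeds approval limit']
```

Because the business analyst edits only the `RULES` model, regenerating the application picks up the change without anyone touching the generated code by hand.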
On the other hand, an organization-wide information warehouse (as an intrinsic part of the enterprise transactional backbone system rather than a separate information "twin tower") has to provide such intelligent availability to data throughout the system. The applications should thus reside within a single shared environment in which metadata (data about data) is defined once and made available immediately to financial, procurement, project costing, HR, payroll, and other applications. This data model not only serves as a shared repository of information for the applications, but also acts as an automatically defined "catalogue" for a range of specialized reporting and information delivery tools. In other words, fully integrated analysis, reporting, and communication facilities (with user-defined business views in an "anytime, anywhere, anyway" manner) are becoming a given in people-, project-, and service-centric organizations. Only by providing all these elements within an integrated ecosystem and context can one hope to achieve a rapid response to change.
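The "define once, available everywhere" metadata idea can be shown with a minimal sketch: every application and reporting tool resolves a data element against the same shared catalogue, so a definition never has to be duplicated. The field names and attributes here are purely illustrative.

```python
# Hypothetical sketch of a shared metadata catalogue: each data element
# is described a single time, and financials, HR, payroll, and reporting
# tools all read the same definition.
CATALOGUE = {
    "cost_centre": {"type": "str", "label": "Cost Centre", "owner": "finance"},
    "hours_worked": {"type": "float", "label": "Hours Worked", "owner": "payroll"},
}

def describe(field):
    """Any application or reporting tool resolves a field through the catalogue."""
    meta = CATALOGUE[field]
    return f'{meta["label"]} ({meta["type"]}, owned by {meta["owner"]})'

print(describe("hours_worked"))
# -> Hours Worked (float, owned by payroll)
```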
Increasingly, central governments worldwide are mandating the use of computer automation and the Internet, in order to link and manage the agencies under their authority, and to better empower employees. The idea is also to enable better provision of local services and accountability to local authorities, boroughs, cities, towns, and ultimately, end users (citizens). In addition, local authorities, health care providers, and emergency services have an obligation to report their performance to larger governmental units, whether states, provinces, cantons, or the central governing bodies. These bodies, in turn, have to publish the results to the electorate. Thus, public officials not only must provide online services to their clientele, but must also be able to retrieve, amalgamate, and report data on the success of their agencies and the status of their progress toward information automation. They require a complete view of management information at all times to make effective judgments about resources and service delivery. Therefore, software applications in support of the public sector must be integrated across all processes, and they must support the internal aspects of the agency or governmental body in its management of data, people, finances, and its particular mission—as well as the public that should, at the end of the day, benefit from the body's charter.
Furthermore, applications using model-based architectures are built on business processes and rules, which allows business analysts to understand and customize the application without compromising its quality. Simple changes to rules also obviate the complex switches and parameterized tables traditionally used to configure the application. Custom applications can be built rapidly for highly specialized businesses or business functions, and such architectures allow for less complexity in the code and significant automation of software code development, which promises significantly increased application quality.
As seen in Architecture Evolution: From Mainframes to Service-oriented Architecture, the service-oriented architecture (SOA) concept should (in theory) be able to help businesses respond more quickly and cost-effectively to changing market conditions by reconfiguring business processes. It should eventually enable agility, flexibility, visibility, and collaboration with trading partners (and between functional and information technology [IT] departments) by promoting reuse at the coarser service (software component) level, rather than at the more granular (and convoluted) object level. In addition, SOA (again, in theory) should simplify "plug and play" interconnection and usage of existing IT assets, including legacy assets.
According to Forrester, from the vantage point of business drivers, the concept should in the long run enable users to adapt their system to processes (and not vice versa); improve system intuitiveness and usability; deliver relevant analytics; connect to external data and services; and leverage readily available best practices and industry knowledge within the vendors' repositories. In the technology lingo, SOA should reduce custom coding through configuration; promote open standards to reduce integration costs; enable end user self-sufficiency (meaning no reliance on nerdy programmers); and provide more flexibility to use best-of-breed products (possibly within composite applications).
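The coarse-grained reuse that SOA promises can be illustrated with a toy composite application: each service exposes one business-level operation behind a generic interface, and the composite wires them together by name without knowing their internals. The service names and logic below are hypothetical, chosen only to show the shape of the idea.

```python
# Minimal sketch of SOA-style coarse-grained reuse: services are composed
# "plug and play" through a registry rather than hardwired calls.
class CreditCheckService:
    def execute(self, request):
        return {"approved": request["amount"] <= request["limit"]}

class OrderService:
    def execute(self, request):
        return {"order_id": 1001, "status": "booked"}

def composite_order_process(registry, request):
    """Compose services by name; any conforming service can be swapped in."""
    credit = registry["credit_check"].execute(request)
    if not credit["approved"]:
        return {"status": "rejected"}
    return registry["order"].execute(request)

registry = {"credit_check": CreditCheckService(), "order": OrderService()}
print(composite_order_process(registry, {"amount": 500, "limit": 1000}))
# -> {'order_id': 1001, 'status': 'booked'}
```

The point of the registry is that a best-of-breed replacement for either service only has to honour the same coarse interface, not share any object-level internals.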
Data published in InformationWeek on September 4, 2006 concurs with the above findings. According to a survey conducted by InformationWeek Research, 72 percent of companies hope to achieve increased flexibility in application development by moving to SOA; 61 percent hope to create service-oriented applications faster; 58 percent are aiming at increased software modularity; and 32 percent are aiming at greater potential for customization.
Still, in addition to still-maturing (and sometimes even conflicting) standards, the challenges facing SOA implementation include managing disparate software metadata repositories (which means frequent data rationalization and replication), reconciling incompatible software abstractions, and maintaining appropriate levels of security. Directing and supplying information on the interaction of services can be complicated, since the architecture relies on complex multiple messaging that opens the door to messy code and broken communication, on top of potential non-compliance with government regulations. The flexibility of SOA also poses security threats, since these applications engulf services, especially those external to company firewalls. The services are thus more visible to external parties than traditional applications, which is why businesses must set policies to control who can see exactly what information.
Problems can also arise when users try to connect services that were not developed in the exact same manner, which is very likely if this is not controlled within a single vendor's ecosystem. One of the key goals of SOA is to remove hardwired, purpose-written point-to-point links and replace them with generic links centred around business functions and processes. However, to achieve this, new software components such as orchestration and workflow engines, communication adapters and translators, testing tools, and service locators have to be added to the already complex architecture. Such mega implementations might be another opportunity for consulting giants like IBM that hope to make a small fortune on SOA projects, which rekindles a sense of déjà vu with respect to the pre-year-2000 (Y2K) salad days. Considering the probably expensive and disruptive upgrades with unproven benefits (and the fact that vendor hype is forging ahead of actual SOA capabilities), many prospective customers might at this stage see these projects as yet another exercise in IT department futility.
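The contrast between hardwired point-to-point links and generic, orchestrated links can be sketched as follows: instead of each application calling another directly, process steps are routed through an engine keyed by business function. This is a deliberately simplified illustration; the function names are hypothetical and a real orchestration engine adds the adapters, translators, and locators discussed above.

```python
# Hypothetical sketch of orchestration replacing point-to-point links:
# steps are dispatched by business-function name, not by direct calls.
HANDLERS = {}

def register(function_name):
    def wrap(fn):
        HANDLERS[function_name] = fn
        return fn
    return wrap

@register("approve_invoice")
def approve_invoice(payload):
    return {**payload, "approved": True}

@register("post_to_ledger")
def post_to_ledger(payload):
    return {**payload, "posted": True}

def orchestrate(process, payload):
    """Run a process as an ordered list of business functions."""
    for step in process:
        payload = HANDLERS[step](payload)  # generic link, not a hardwired call
    return payload

print(orchestrate(["approve_invoice", "post_to_ledger"], {"invoice": 42}))
# -> {'invoice': 42, 'approved': True, 'posted': True}
```

Reordering or replacing a step means editing the process list or registry, not rewriting caller code; that indirection is exactly the extra machinery the paragraph above warns must be added to the architecture.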
Ironically, although seen as helping heterogeneous and legacy environments rejuvenate themselves, SOA might function best within a homogenous domain and context, where data and processes are well aware of each other, as in the case of the Agresso Business World (ABW) or Epicor for Service Enterprises products. Lawson Software has also recently embarked on a major SOA-based product rewrite called Landmark, where the idea is to automatically generate product code (and services), and to avoid the possible SOA traps mentioned above, since the code generator will have all the validation rules and constraints within the context of the scope for the Lawson S3 product (see A New Platform to Battle Software Bloat?).
These provisos aside, we still have a ways to go before post-implementation change becomes a solid, controlled process with built-in management and quality, while providing the business user visualization and evaluation of potential modifications. Ideally, it should assist the business user to understand what the system will look like and how it will operate after the change—in order to avoid surprises and rework, and further user acceptance. The full impact of any change must be known in advance, including the impact on the system, and the cost and time required to make the change. This should be reflected in the "total system," including documentation, user manuals, help text, and so on, as such information will facilitate the management process. However, most of the current SOA developments are far from the promised nirvana of new product releases that will allow any modification to be easily re-evaluated and re-applied as necessary at a realistic time, cost, and quality—to say nothing of modifications that evolve with the needs of the business and the advances made by the vendor.
To recap, while SOA does facilitate standardization, allow for the assembly and integration of loosely coupled software components (services), and accommodate customized portal-based presentation, it is not yet a panacea. Hence, it is a fallacy to expect that the mere concept will turn rigid products written in ancient code into flexible applications providing analytic information that has not been natively enabled, and deliver similar benefits. To change radically, the underlying product has to be either properly architected from the ground up (as with Agresso, which likes to compare its agility to a chameleon's ability to adapt to its environment), or totally rewritten in new, modern languages and technologies. For more information, see Rewrite or Wrap-around Old Software. Without true modernization of the underlying applications, SOA embellishments will largely be analogous to "putting makeup on a pig."
While in theory, one can abandon the existing infrastructure and go to an ideal, agile applications world, this will not prove practical for the vast majority of heterogeneous environments. For most of us, the IT world is a mix of multiple applications, technologies, and so on, and the preferred architecture will be the one that can rationalize business processes without ripping out the current investments that most companies have made in applications.