Wednesday, October 21, 2009

Resilient Enterprise Solutions Vendor Displays Sociability and Pragmatic Product Development

In a market noted for its turbulence, the ongoing turnaround success of IFS, a global enterprise applications supplier, has gone somewhat unnoticed. IFS was the fastest growing enterprise resource planning (ERP) supplier in the mid-to-late 1990s. But the early 2000s marked a severely painful period for the vendor, including losses peaking at about $85 million (USD) on revenues of about $313 million (USD) for 2002. The IFS turnaround is impressive, and its management attributes the success to its more focused sales strategy, increased organizational efficiencies, and continued emphasis on a selected, manageable number of vertical industries. For more information on IFS' background, see Enterprise Applications Vendor Reverses Fortunes—But Will Perseverance and Agility Be Enough?

Part Two of the series Enterprise Applications Vendor Reverses Fortunes—But Will Perseverance and Agility Be Enough?

In addition to focusing on profitability and positive cash flow, during its stabilization phase IFS started paying attention to balanced growth (through more reliance on market penetration and product enhancements via strategic partnerships), whereby product development costs were tied to new sales. To that end, product marketing also started targeting global industry niches such as automotive importer and supplier management (as sub-segments of the much wider automotive vertical), rather than having each country or region pursue its own target markets. More than half of product development has since been industry-specific, of which half was aimed at manufacturing, about one third at service industries, and the rest at asset-intensive industries.

The consolidation phase has thus brought about pragmatic product developments: the latest product release (IFS Applications 7), which was conceived during this phase (and launched on time in early 2006), required about a quarter of the development effort to "future proof" the product. Furthermore, at the IFS World Conference 2005, the vendor announced the seventh generation of its component-based application suite, which should bring industry focus and productivity to the next level by including innovations in application design, an intuitive graphical user interface (GUI), and information visualization. Based on the IFS SOCA mentioned in Part One of this series, the product aims at offering increased flexibility, enabling companies to continuously revise and improve processes as well as reshape applications as processes change.

A new role-based user interface has been rolled out across the product to enable hyperlinked and much easier navigation. One example of the new GUI use is within a new application in the IFS Applications 7 suite, MaxOEE (Maximum Overall Equipment Effectiveness), which emphasizes user productivity and usability while also offering vertical industry depth. Although comparable with other ERP suppliers' OEE offerings, the module will be even more impressive once developed into a full-fledged corporate performance management (CPM) solution.

IFS Applications 7 continues to support a growing number of project-driven enterprises and their need for efficient processes, by introducing a broad range of new capabilities, such as contract lifecycle management, project-driven manufacturing, actual costs, revision control, and project supply chain management (SCM). Having designated this segment as one which will drive lots of new business opportunities (especially in the construction and utilities verticals), the IFS industry solution for project-oriented industries (engineering, procurement, construction, and installation, or EPCI) will standardize and support user processes in engineering, project management, purchasing, document management, financial management, SCM, human resources (HR), and after-sales activities. The solution also encompasses interfaces to computer-assisted design (CAD) software and planning systems, and includes Web-based solutions for engineering and documentation, where a multi-disciplined Norsk Sokkels Konkurranseposisjon (NORSOK)-based engineering register is integrated with purchasing and delivery (NORSOK being the body that oversees the competitive standing of the Norwegian offshore sector). In addition, the potential for extensive use of portals simplifies cooperation with suppliers in global delivery projects, since portals should allow fast access to information from the total industry solution. IFS has many years of experience as a partner to project-oriented companies, having more than 300 customers with project-based activities. For more on project-oriented requirements and software, see Project-oriented Software: Many Choices, Many Differences.

Supporting the company's focus on seven key vertical industries, IFS also announced a number of new vertical market solutions, including workforce planning and call center management within the service and facilities management industry, which has to manage service calls from issue to resolution. These new features in IFS Applications 7 aim at enabling more efficient service processes for both independent service providers and companies providing advanced after-sales service (manufacturers of complex products, and asset- and service-intensive businesses). To that end, IFS Applications 7 includes new functionality for call centers; enhanced resource allocation and planning software; and improved mobile field service, project connection, sales contract management, and application for payment. With shorter product lifecycles, more complex products, and tougher competition from developing countries, service management has become a high priority for most companies. Chief financial officers (CFOs), controllers, and financial managers should thus get visibility into the performance metrics of service processes using integrated service performance and financial analysis tools, while service personnel can also access sales and payment histories. Standard features of IFS Applications 7 also include collaborative customer self-service and service enterprise portals, which allow customers to enter cases and service requests, view the progress of the work, and perform follow-up activities themselves. For manufacturers that service what they sell, the suite offers support for product lifecycle management (PLM), enabling improved collaboration between design, engineering, production, and service personnel. IFS service management customers include Dalkia, Debut Services, First Engineering, Anticimex, Philips Business Communication, Munters, and Kalmar Industries.
To help companies reduce the increasing complexity of global supply chains and to increase customer responsiveness while maintaining cost efficiency, IFS Applications 7 also includes new solutions for multi-site supply chain planning (SCP), and collaboration in demand and supply networks. Many industries should benefit from multi-site functionality which reduces the risk of planning errors, shortens lead times, and accelerates inventory turnaround. Product structures and planning from multiple site locations can be managed centrally to improve the efficiency of goods handling. However, supply chain execution (SCE) capabilities, particularly warehouse management, remain fairly basic; they allow preparation of shipment documents and packaging to proceed in parallel with the manufacturing of the products, and reduce the time from call-off to delivery through more efficient internal handling.

Reinforcing the company's aforementioned position as the vendor committed to standards and technology co-existence, IFS Applications 7 includes support for the latest J2EE and Microsoft .NET-based technologies, such as IBM WebSphere 6, Oracle AS 10g, and JBoss 4.0. The Java portal standard (JSR 168) is also supported, allowing companies to access and use IFS Applications directly from within the IBM WebSphere Portal.

At the conference, IFS unveiled Project Attila, a collaborative effort with Microsoft to develop smart clients for faster and easier access to IFS business applications from within Microsoft Office products. This is an initiative to enable better-informed business decisions, resembling Project Mendocino (recently renamed commercially as Duet) pioneered by Microsoft with SAP. See Major Vendors Adapting to User Requirements. Like Duet, Attila should increase efficiency for information workers who use Microsoft Office as their primary working environment, allowing them to browse and update enterprise data directly within Office programs. The initial release of Attila, which includes reporting, budgeting, and financial planning modules integrated in Microsoft Excel, is scheduled to be available for select customers in the second half of 2006.

Most companies use Microsoft Excel in their budgeting and financial planning processes. With Attila, department managers and other workers involved with budgeting and planning should be able to retrieve, review, and update budget data directly through the familiar Excel interface. While working in Excel, they will benefit from the centralized storage, distribution, and revision handling provided by IFS Applications. The reporting module will make all business data stored in IFS Applications instantly available for processing and analysis in Excel, with the full security and access control of IFS Applications. Subsequent releases of Attila will include modules that integrate Microsoft Outlook with the workforce management and document management components of IFS Applications.

Attila will leverage the openness of the IFS component-based architecture, Microsoft .NET, and the upcoming Microsoft Visual Studio 2005, to provide a completely standards-based integration between the two applications, which will supposedly not require any additional setup or management. IFS and Microsoft have a history of cooperation, since more than half of IFS customers run IFS Applications on Microsoft Windows, while many IFS customers use other Microsoft products, such as BizTalk Server and Exchange, integrated with IFS Applications.
Incidentally, given the agnostic approach and partner-friendly nature of IFS, the vendor has in particular been more aggressively moving forward with a partnership strategy to further grow its business outside Europe (which is still its main breadwinning market, with the domestic Swedish market contributing about 15 percent of total revenues). To that end, while until only a few years ago IFS sold and implemented directly, in recent months partners have become responsible for one third of new license sales, with partners contributing 23 percent of total revenues in 2005. The main IFS hardware partners remain IBM and Hewlett-Packard (HP), while its main software platform and middleware partners are IBM, Oracle, Microsoft, and Sun Microsystems. As for software development and reselling, IFS partners fall into the following three categories:

* Industry partners, such as BAE Systems, General Dynamics, and Lockheed Martin in aerospace and defense (A&D), which have been useful resellers, after first being highly experienced users. To some customer-partners, such as BAE Systems, IFS has given board-level focus in co-developing the sharp vertical solution. Consequently, BAE now leverages its contacts and IFS expertise to dominate the UK government defense sector. Other high-profile partnerships include GE Engine Service for commercial aerospace.

* Software resellers, such as NEC, which primarily covers channels in Japan, and provides assistance to the Chinese market as well, where many Japanese-owned companies have been setting up offshore operations. Early in 2004, IFS announced that NEC had even taken an equity position in IFS, and it now owns 10 percent or so of IFS stock, which should entice it to more directly invest in IFS functionality for the Asian market, and to expand IFS implementation capabilities therein.

* Systems integrators, such as Accenture, IBM Global Services, and ATOS Origin for implementation services in opportunistic industry sub-verticals in selected countries.

Likewise, the vendor plans to repeat the model of developing global and local partnerships with well-known companies in niche industries in different countries (such as ABB, IBM, Det Norske Veritas, and so on) for expansion, while product development focuses on deepening its functionality to retain its position in its chosen markets, and also while broadening scope to capture some adjacent industries in the future. IFS also expects to offer more specialized best-of-breed solutions with the above partners, where appropriate. A perfect example is the alliance with ABB (another equity partner) to deliver IFS Enterprise Asset Management (EAM) solutions, which could possibly make IFS a leading EAM player in the future.
Nowadays, IFS has around 2,200 customers and over 750,000 users, of which over 100,000 were added in the last 24 months. Existing customers appear confident in IFS, as 56 percent are on the last two product releases—IFS Applications 2003 and IFS Applications 2004 (with 26 and 30 percent respectively). Having seemingly turned the corner on its woes of the early 2000s, IFS has most recently embarked on a phase of sustainable growth, and to that end has appointed a new CEO, albeit this time from the UK ranks. Known for his sales and marketing execution savvy, Alastair Sorbie (who set up IFS in the UK in 1997, and who was subsequently in charge of the company's Europe, Middle East, and Africa [EMEA] operations) took over from Michael Hallén (who has done a great job of setting a sound foundation) in early 2006. In EMEA, Sorbie presided over a string of successes, and had responsibility for 70 percent of IFS revenues and 40 percent of its staff, making it the company's largest division.

In addition to sticking to the strategy of the previous phase to continue the current focus on profitability through cost reduction and license growth with more partner business, IFS has also recently increased its investment in marketing to gain proper brand recognition and differentiation in its key industry verticals. These "IFS brand enhancers" are long overdue, given the vendor's protracted "industry's best-kept secret" status, and are needed to increase license revenue growth and maximize shareholder value, while building on the vendor's rich history, identity, and employee pride. To that brand-building end, IFS has recently undertaken many new media campaigns (both in print and online—in the first half of 2006 it has already doubled media impressions over the whole of 2005) and significantly increased public relations (PR) activities (both by dedicating its own staff and by hiring the LEWIS PR agency for a consistent global message to the market).

For the future, the ongoing restructuring, which especially started in EMEA operations in 2005, will continue—transferring some responsibilities to other parts of the globe (in other words, back-office corporate functions like finance and administration and partner support will continue to be managed from Sweden, while front-office functions like sales and marketing are being increasingly moved to the UK and US), and readying the company for sustainable growth and support of its customers. IFS has announced organizational changes which will build on its SOA prowess—making its product development more agile and market-focused. Namely, IFS is merging the products organization and the industry and marketing teams under one leadership structure to increase agility in bringing the right products to market at the right time. Furthermore, IFS is creating a business development team to investigate innovative go-to-market strategies (with high-profile partners and customers to develop "fringe" functionality atop the IFS foundation layer and peddle the solutions afterwards) and business models, while internally IFS is also consolidating its corporate support functions.

After a few years of constant workforce shedding, IFS increased headcount in 2005 because it took control of subsidiaries in India (IFS recently bought its partner's share in the Indian joint venture to turn it into a full subsidiary) and increased its direct presence in South Africa (with a focus on telecommunications and utilities), and it still has a strong interest in partnering (for example, with black empowerment concerns in South Africa such as Motswedi Technology Group). Emerging markets for IFS are China, India, Libya, Russia, Turkey, and Ukraine. While IFS Middle East has lately been the most profitable region, half of Asia-Pacific revenue is from China, where IFS is very strong in utility industries and has contracts with several top Chinese power plants, including Three Gorges. Another partnership in China is through a joint venture with Beijing IFS UFSoft. The vendor hopes to play a more important role in China, especially with local Chinese customers, and the product's flexibility should allow it to cope with the changing and demanding Chinese market.

Enterprise Applications Vendor Reverses Fortunes—But Will Perseverance and Agility Be Enough?

Part One of the series Enterprise Applications Vendor Reverses Fortunes—But Will Perseverance and Agility Be Enough?

In a nutshell, IFS was the fastest growing enterprise resource planning (ERP) supplier in the mid-to-late 1990s. But the early 2000s marked a painful adjustment to slower growth and sometimes declining revenues. The vendor also had to deal with losses—some of which were whopping—for several years running, peaking at about $85 million (USD) on revenues of about $313 million (USD) for 2002. The IFS turnaround is thus impressive, since 2005 was the first profitable year in a long while (with profits of $13 million [USD] on total revenues of $288 million [USD]). The vendor also gained 10 percent in customers (including over 200 new customers), and over 15 percent of existing customers reportedly invested in new functionality during the year. IFS management attributes the success to its more focused sales strategy (including finding a tricky balance between not being seen as a niche provider and not being "all things to all people" either), increased organizational efficiencies (with costs and expenditures aligned with revenues), and continued emphasis on a selected, manageable number of vertical industries.
Before delving deeper into these current foundations of success, it might be useful to review the vendor's genesis. In general, IFS has typically found success when growing organically (for the most part) and by staying focused on midsized manufacturing enterprises. Technology Evaluation Centers (TEC) has frequently covered IFS since the late 1990s, when the vendor was undergoing a phase of rapid growth (especially in terms of new licenses) and global expansion. Add a good product, and one would have thought that nothing could ever go wrong with the vendor. The company's roots go back to 1983, when it was founded in Sweden, with software used to maintain assets for large utilities; 1986 marked the launch of IFS Maintenance. The company extended into manufacturing, distribution, and order entry and management as parts of a more complete ERP suite in 1990; 1991 marked its geographic expansion into Norway, Finland, and Poland, whereas 1993 was marked by the first graphical user interface (GUI) and the opening of offices in Malaysia and Denmark.

In 1994, IFS pioneered component-based ERP software with IFS Applications, now in its seventh generation. Its component-based architecture has helped the vendor provide solutions that are typically easier to implement, run, and upgrade. It would be worthwhile to state at this point that IFS could be an object lesson in how a great product (in terms of functionality scope and technological foundation) and knowledgeable employees are only part of any wholesale success in the finicky enterprise applications market of today. For instance, back in 1994, IFS began a development project to transfer its flagship IFS Applications suite to object-oriented technology, which was completed in 1997 with the launch of the IFS Applications 1998 product suite. This was (and still is) in sharp contrast to the vast majority of competing enterprise applications, which remain largely on a monolithic client/server architecture and are in the midst of colossal efforts by vendors to move their spaghetti-like code-based applications to service-oriented architecture (SOA). The IFS business concept has since been to increase the "freedom of action" and competitiveness of user companies by enabling customers to either apply IFS solutions as a complete enterprise system, or as a complement to other vendors' applications within a specific part of the business process, which again is in tune with the SOA concepts of flexibility, agility, reuse, and so on. The main premise of SOA is to possess a number of individually developed, reusable software components (services) that can perform the functions of those applications (instead of having separate applications). Since many functions are common to many applications, services can supposedly be used in more than one setting, which in turn should reduce total development time and costs, and increase the agility of businesses. For more information, see Understanding SOA, Web Services, BPM, BPEL, and More and SOA as a Foundation for Applications and Infrastructure.
For over a decade, the cornerstone of the IFS strategy has thus been founded on its now proverbial component-based architecture with a well-rounded product footprint (to fit many manufacturing environments, including the mixed-mode or hybrid ones) and moderate vertical market focus. This has thereby become part of its identity, and a key ingredient in being able to deliver even deeper vertical industry functionality going forward. Also recognizing its scalability limitations (in addition to the rigidity of its erstwhile two-tier client/server architecture), in the mid-90s IFS embarked on creating an n-tier product architecture that would separate presentation, business logic, and data storage layers, and also render IFS independent from Oracle development tools and the use of stored procedures in the Oracle database.

IFS Applications 2001 was consequently heralded as a fully internet-enabled and componentized five-tier architecture suite, covering most traditional horizontal ERP functionality via a mandatory IFS Foundation layer, on top of which one can build (in a "pick and mix" manner) the functional modules needed to satisfy the needs of more specific businesses. The architecture, which has been called Foundation1 since 2002, also allows new technologies and components to be swapped in and out of the technology stack relatively easily, without causing major disruption to the install base, and it also provides an easier way of interfacing or integrating with other systems. Since 2002, IFS has also provided Web service access and Java 2 Enterprise Edition (J2EE) within its architecture. Namely, the original n-tier approach was based on older CORBA (Common Object Request Broker Architecture) technologies. As newer, better, and standardized technologies became available (such as J2EE), IFS moved Foundation1 from CORBA to J2EE without significantly changing its application components. This fundamental technology shift demonstrated the value of this approach without impacting the core applications, and IFS customers received this technology uplift as part of a normal version upgrade.

Also, the IFS functionality has been split across several dozen independent modules, which are actually coarser objects or components, and which can be implemented and upgraded separately from one another (this has been true for some time now). At their own convenient pace, companies can select modules to coexist with other legacy applications and databases, or simply avoid the "big bang" monolithic implementation approach that is increasingly being avoided as an unwieldy practice (for more information, see The 'Joy' of Enterprise Systems Implementations). Built-in extensible markup language (XML) messaging support and the external availability of all internal application programming interfaces (APIs) mean that integration between IFS components and other companies' software should be a reasonable endeavor. This layer of messaging via XML and Web services could in fact allow so-called "composite applications" to be assembled and deployed from multiple vendors. For more details, see IFS To Be At Customers' (Web) Service.

Further, owing to the component architecture, customers can, for example, install the latest version of a certain IFS component even while still using an older version of IFS Applications. And since the component architecture has been further enhanced within IFS Applications 2004 with the J2EE interface (dubbed IFS Service Oriented Component Architecture [SOCA]), thereby further basing IFS modules on open, commonly accepted standards, they should more readily be integrated into a company's existing information technology (IT) ecosystem. To that end, the IFS/Connect module allows any service or software component to be published as a Web service, transmitted via numerous protocols, integrated with messaging middleware products, or simply exported to a flat file. Designed for XML and the Web services concept, IFS/Connect also integrates with legacy systems, electronic data interchange (EDI) handlers, file import/export, and event notification. Moreover, while Foundation1 is based on the commonly accepted open Internet standards, its design thinking is somewhat different, since IFS anticipated the need a while back to keep abreast of inevitable technology changes by incorporating the capability to add, change, or remove individual technology components on an ongoing basis and as required. To that end, Foundation1 has already revised its web tier once and its mobile infrastructure layer twice.
One should note, though, that the aforementioned notable feats were built through the company's hefty research and development (R&D) investments and some modest acquisitions, especially throughout the ebullient dot-com era of the late 1990s, when not much attention was given to profitability. Around that time, in 1995, IFS opened offices in North America and Indonesia. Soon after, in 1996, it acquired Avalon Software, the US-based ERP provider, and in 1998 went public on the Swedish Stock Exchange, which, at the time, gave a false impression of almost unlimited capital investment. In 1997, IFS launched its web-based client (the predecessor of today's IFS/Collaborative Solutions, which provides role-based portal views that can be configured to customers' unique requirements and the type of collaboration they desire), and expanded in the UK, Germany, France, Brazil, and Turkey. Hungary and Argentina followed in 1998.

In 1999, the vendor expanded into Greece and acquired US-based Effective Management Systems (EMS), as a way to move more aggressively into the North American market by gaining a sales force, a development team, and a local footprint. This has had only mixed success, since the acquired Time Critical Manufacturing (TCM) product line was discontinued, and only some of its customers migrated to the IFS product. IFS had hoped to convert its customer base from the maturing TCM product to its own modern enterprise applications, and consequently gain a quick US beachhead. However, customer satisfaction with TCM was very high, and customer loyalty therefore made it difficult to move customers away from it. With the majority of the TCM customers reluctant to make the transition, IFS heavy-heartedly sold the TCM product line in November 2001, returning the business unit to the original founder of EMS (Mike Dunham), who subsequently renamed the company WorkWise. Over 500 companies continue to use TCM to manage their businesses (see A User Centric WorkWise Customer Conference).

Transportation Management Systems: The Glue of the Supply Chain

Supply chains are becoming increasingly complex, and as manufacturers create a “value chain” that spans many countries, transportation of final goods or raw materials is a critical component to their business. If goods do not arrive at their destination on time, the manufacturing process will come to a halt and links within the supply chain will break, causing problems for other entities down the chain.

Along with this, the import and export of products is increasing, leading to greater movement of goods through distribution centers (DCs) and to a higher volume of products that need to be moved.

And then there is the issue of how companies deal with ever-increasing fuel costs.

It’s hard to imagine how companies can deal with the difficulties described above, but transportation management systems (TMSs) can do plenty to help manage the complexities of manufacturing today.

A Solution for the Complexities of Manufacturing

Given the growing need to move products inland and the increase in fuel prices, TMS software is a vital tool for today’s logistics industry, and the need for this enterprise application will only increase in the next five years.

As manufacturers’ supply chains continue to expand, developing networks and using different modes of transportation (truck, rail, air, and boat) can be quite a challenge. Networks and the use of these transportation modes need to be optimized. Otherwise, the following basic questions will be exceedingly difficult to answer, leading to serious visibility problems for the manufacturer:

* Where are the goods now?
* When and where are the goods to be shipped?
* What mode(s) of transport should be used to ship the goods?

So, what exactly is a TMS?

A TMS is designed to manage the different modes of transportation used to move products, whether finished or semi-finished. Transportation modes consist of ground, air, rail, and sea transport. A TMS determines the optimal path to transport products based on distance, location, and route.
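
To make the idea concrete, here is a minimal sketch (in Python, with purely illustrative lane data) of how a lowest-cost route might be chosen across several transport modes; a real TMS would also weigh transit times, contracted rates, and service levels.

```python
# Minimal sketch of route selection across transport modes (hypothetical data).
# We simply pick the lowest-cost path over a small lane network using
# Dijkstra's algorithm; the lane costs are illustrative numbers only.
import heapq

# Each lane: (origin, destination, mode, cost).
LANES = [
    ("plant",   "port",    "truck", 120),
    ("plant",   "airport", "truck", 150),
    ("port",    "dc_west", "sea",   400),
    ("airport", "dc_west", "air",   900),
    ("dc_west", "retail",  "truck", 180),
]

def cheapest_route(origin, destination):
    """Return (total_cost, [(mode, stop), ...]) for the lowest-cost path."""
    graph = {}
    for src, dst, mode, cost in LANES:
        graph.setdefault(src, []).append((dst, mode, cost))
    queue = [(0, origin, [])]   # (cost so far, current node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for dst, mode, lane_cost in graph.get(node, []):
            heapq.heappush(queue, (cost + lane_cost, dst, path + [(mode, dst)]))
    return None  # no route found

if __name__ == "__main__":
    print(cheapest_route("plant", "retail"))
    # -> (700, [('truck', 'port'), ('sea', 'dc_west'), ('truck', 'retail')])
```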

The Anatomy of a TMS

A TMS’s basic functionality comprises the following:

Lane set-up: This has to do with multimode types of transportation. If moving a certain product requires three types of transportation methods (for example, rail, truck, then rail again), the system will be able to schedule all three of these means of transport.

Geographic set-up: This will link geographic locations together, as well as set up the service levels between different parties along the logistics chain.

Carrier and contract details: Whether the company using a TMS solution is outsourcing some of its transportation needs or managing transportation methods itself, the system will research the rates of each carrier (transportation or logistics company) and select the carrier with the best price. Sometimes, however, if a carrier has received three strikes against it (for example) for not delivering product on time, the system will not consider it an option, even if it offers the best price; the TMS solution will select another carrier that fits the needs of the client, even if its delivery cost is higher. Likewise, if the carrier with the best price does not cover the appropriate geographic range, the TMS will not select it.
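
The carrier-selection rule just described can be expressed in a few lines. The sketch below is only an illustration with hypothetical carriers and field names; commercial TMS products apply far richer contract and performance data.

```python
# Illustrative carrier selection: prefer the lowest rate, but skip carriers
# with too many late-delivery strikes or outside the lane's geographic range.
from dataclasses import dataclass

@dataclass
class Carrier:
    name: str
    rate: float          # quoted price for the lane
    late_strikes: int    # on-time failures recorded against the carrier
    serves_lane: bool    # whether the carrier covers this origin/destination

def select_carrier(carriers, max_strikes=3):
    eligible = [c for c in carriers
                if c.serves_lane and c.late_strikes < max_strikes]
    if not eligible:
        return None  # fall back to manual planning
    return min(eligible, key=lambda c: c.rate)

carriers = [
    Carrier("FastFreight", rate=950.0,  late_strikes=3, serves_lane=True),   # cheapest, but struck out
    Carrier("SteadyHaul",  rate=1100.0, late_strikes=0, serves_lane=True),
    Carrier("FarAway Inc", rate=900.0,  late_strikes=0, serves_lane=False),  # outside the lane
]
print(select_carrier(carriers).name)  # -> SteadyHaul
```
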
Transportation network optimization: This is one of the most critical components of a TMS. The TMS will define the following three aspects, each of which helps to manage the manufacturer’s “private fleet” (its fleet of transportation vehicles):

1. Strategic and master route design: This gives managers in charge of delivery the ability to decide the optimal route the driver of each vehicle can take, allowing expedient delivery of the product.
2. Territory design: This allows manufacturers to establish a standard route for regular, recurrent deliveries.
3. Routing and scheduling: This gives manufacturers the ability to optimize the schedules of all the modes of transportation they need to use, as well as define the best routes possible. If a route is unavailable, the system, using global positioning system (GPS) technology, can determine another route.

A TMS can also offer advanced functionality, which does even more than the above. For example, a logistics company or a third party logistics (3PL) provider may have some stock to move out; the company loads the stock onto a truck and sends it off. This simple process may be good enough, but optimization has not been achieved at this point.

A TMS with advanced functionality is able to perform the following:

Cubing: This enables logistics managers to 1) maximize the number of pallets or boxes that can be put into an enclosed space, be it a truck or an airplane, and 2) take into consideration heavy and light items. The data that is pulled from the warehouse management system (WMS) or enterprise resource planning (ERP) system gives managers information on which pallets to load at the bottom so as not to damage any inventory.

These two capabilities allow logistics managers to assess how to save on fuel costs. The manager can compare whether a load is better sent as one full truckload (i.e., at the maximum weight) with a discount, or as two less-than-truckload (LTL) shipments, which would save on fuel costs because the loads are lighter and require less fuel.
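
A simplified version of that full-truckload-versus-LTL comparison might look like the following; the rates, discount, and fuel savings are hypothetical and stand in for figures a TMS would pull from carrier contracts.

```python
# Back-of-the-envelope comparison of one discounted full truckload (FTL)
# versus two less-than-truckload (LTL) shipments; all rates are hypothetical.
def ftl_cost(base_rate, discount):
    return base_rate * (1 - discount)

def ltl_cost(per_shipment_rate, shipments, fuel_saving_per_shipment):
    return shipments * (per_shipment_rate - fuel_saving_per_shipment)

one_ftl  = ftl_cost(base_rate=2000.0, discount=0.15)                  # 1700.0
two_ltls = ltl_cost(per_shipment_rate=1000.0, shipments=2,
                    fuel_saving_per_shipment=120.0)                   # 1760.0

print("Ship as one FTL" if one_ftl <= two_ltls else "Ship as two LTLs")
```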

Advanced scheduling: Cubing ties into advanced scheduling and routing optimization. Advanced scheduling will take into account the above variables, and set out a path for the driver to take to deliver shipments 1) on time, and 2) in sequence, allowing the strategic placement of goods onto the vehicle.

Consolidation: This enables logistics providers to combine multiple loads, whether they come from one location or from multiple locations. It works with what is known as cross-docking and multi-stop pickups. If a logistics company wants to cross-dock (the process whereby a shipment is unloaded at one DC and redistributed to other locations, whether retail locations, manufacturers, or other DCs) or perform multi-stop pickups, the system will figure out which items to consolidate with which load, and optimize the routing and cubing at the same time.

With the above functionality, the modules within the TMS can be combined in any permutation, so that different pickup and delivery models can be designed by the logistics company and incorporated into the TMS. From this, the organization is able to optimize logistics processes, which are often very complex.
How a TMS Can Help Manufacturers: An Example

Let’s look at a concrete example to see how a TMS can help organizations optimize their entire value chain. Because the manufacturing environment is now global, this example will involve China, Canada, and Spain as parts of a manufacturer’s complete value chain. Consider the following scenario:

A manufacturer of cellular telephones has one DC in China, another DC in Vancouver, two manufacturing plants in China, one retail location in Vancouver, and another retail outlet in Spain. Thus, this manufacturer’s supply chain is highly complex for many of its suppliers and distributors to navigate. Figure 1 depicts the process, or flow, for moving product throughout this supply chain.


Figure 1. A value chain model.

Because products must be moved across many countries and via several transportation systems, three scenarios are possible:

1. The manufacturer will send the cell phones directly from the DC in China to the retail location in Spain. In this case, planes and trucks (air and ground methods) will be the chosen modes of transportation.
2. The manufacturer will have the cell phones moved through the DC in China to Canada (either by ocean or air), transported to the DC in Vancouver, and on to the retail location.
3. The cell phones are moved from China to the DC in Canada, but they are sent back to the China DC because of product defects. In this case, the DC in China sends the cell phones back to the manufacturer in China, which either repairs the defects or sends the phones back through its supply chain.

All three of these scenarios involve heavy TMS functionality. The scheduling and routing, cubing for the cell phone boxes, consolidation for cross-docking in different loading zones, dealing with geographic setups, and scheduling all types of transportation modes are critical to this process. If one of the DCs or manufacturing plants does not deliver its products on time, the TMS will adjust the scheduling or find an alternative route.

Because TMS solutions can optimize the loads put onto different vehicles or other modes of transportation, managers in this example can determine the fuel cost of delivering product to the different DCs, and they can know when product is to be delivered and the optimal amount of goods to deliver to the appropriate location. The TMS incorporates the appropriate information into the routing optimization tool. This gives individuals within the supply chain the best information on how to save money and time by knowing what products to send and how to send them. This knowledge saves a company money on fuel and time, and ultimately increases the manufacturer’s bottom line.

An Analyst's View of Process Industry SMB Challenges

The process industry provides many of the products we use in our daily lives for food, shelter, and health. Such products are created as materials and transformed through the use of energy resources and chemical products. In addition, the process industry manufactures products that are essential to advanced industries such as computing, biotechnology, telecommunications, automotive, scientific, and space exploration.

These industries are facing major pressures not only to meet the present needs of our global economy, but also to do so without compromising future generations by ensuring that processes

* meet environmental guidelines
* optimize energy resources efficiently
* result in products that are safer, more reliable, and more functional
* provide features that meet both industry and consumer needs

This article focuses on how enterprise resource planning (ERP) vendors are helping the process industry both meet the needs of today and deliver on anticipated functional requirements that will help meet the needs of tomorrow.

Process Industry Manufacturing Challenges

Manufacturers in the process industry are at a difficult crossroads. Although the industry is not facing any imminent, substantial decrease in its overall profit margins, there is concern in the industry, according to a recent study by the Canadian Manufacturers and Exporters Association, which cites the following issues:

* increased global competition
* foreign currency fluctuation
* changing patterns of customer demand
* escalating business costs
* problems in implementing new technologies
* competitive business pressures
* shortage of skilled workers

To address these issues, process industry manufacturers and distributors must manage the following key activities, and ensure they use an enterprise system that supports these activities:

* Planning production for both materials and capacity—to develop a production plan, manufacturers must ensure that there are sufficient available resources and materials, production capacity, and labor.
* Inventory tracking and controlling work-in-process (WIP)—monitoring material consumption and tracking work order progress is the basis of manufacturers' being able to meet sales order, demand, and delivery dates.
* Replenishment and demand planning—the ability to review variances between forecasted and actual sales is the basis of managing vendor lead times and raw material replenishment.
* Managing the supply chain for order fulfillment—reviewing the global supply chain provides manufacturers with the ability to coordinate logistics and operational activity to meet customer order fulfillment expectations.

Specific Requirements of an ERP System for the Process Industry

Here's an overview of how some of the functionalities of an ERP system for process industries help manufacturers better perform the activities listed above.

1. Conversion process capability
In the process industry, the bill of materials (BOM) used in discrete manufacturing is replaced by the master product formula, or simply the formula. The formula requires a conversion table for measures, such as weights from grams to pounds, and must have the ability to record liquid units of measure in both metric and US standard units. The formula must also record specific information related to product characteristics that can affect manufacturing processes. For example, in the blending process, the system can record product information such as percentage calculations of raw materials, and the effective specific gravity, potency, density, and number of reactives of those raw materials.
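
As a rough illustration of the conversion table such a formula depends on, the sketch below (with rounded, assumed factors) converts between metric and US standard units of mass and volume; an actual process ERP system would also carry density, potency, and other characteristic data.

```python
# Minimal unit-conversion sketch for a process formula (hypothetical factors).
TO_BASE = {
    # mass -> grams
    "g": 1.0, "kg": 1000.0, "lb": 453.592,
    # volume -> liters
    "l": 1.0, "ml": 0.001, "gal": 3.785,
}
DIMENSION = {"g": "mass", "kg": "mass", "lb": "mass",
             "l": "volume", "ml": "volume", "gal": "volume"}

def convert(quantity, from_unit, to_unit):
    """Convert within the same dimension (mass or volume)."""
    if DIMENSION[from_unit] != DIMENSION[to_unit]:
        raise ValueError("cannot convert between mass and volume without density")
    return quantity * TO_BASE[from_unit] / TO_BASE[to_unit]

# A formula line calling for 2.5 lb of a raw material, issued in kilograms:
print(round(convert(2.5, "lb", "kg"), 3))   # -> 1.134
# A blend step calling for 10 gal of solvent, consumed in liters:
print(round(convert(10, "gal", "l"), 2))    # -> 37.85
```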

2. Interface to other modules
The master formula can also be linked to submodules like quality assurance (QA), procurement, inventory, and accounts payable (A/P) for government compliance and safety issues. Also, the manufacturer must be able to trace products in order to manage inventory dating, lot control, and the amount of inventory available at the distribution level. Furthermore, there are government and regulatory concerns that deal with the nature of the materials, as there may be a controlled substance with specific shipping, handling, and storage regulations. Or, the manufacturing process may emit hazardous by-products. Or, there may be logistical concerns within the manufacturing process itself.
3. QA module and flexible formula adjustments
A process industry ERP system must also have a formulation-balancing operation based on the premise that the QA group tests random samplings of production batches. The system needs the ability to adjust, through a programmable logic controller (PLC) interface, for any variations in materials used and external factors such as humidity, temperature, cool-down speeds, etc. Also, the material flow and consumption are recorded back into the ERP system. The system's routing functionalities reflect those capabilities as a requirement or not, depending on the user's specifications.

4. Reworking all co-products and scrap materials
As a result of manufacturing processes, residual materials (by-products) may be created. These by-products can be collected as waste and reused. This is the case within the plastics industry, where the collection and re-entry of materials into the process is subject to very specific criteria. In the process industry, due to a continuous production flow operation, the production process generates a theoretical production yield, which may be calculated by the downstream packaging operation as units for case-pack quantities. The residual amount generated from the production process may vary within a percentage point, but in the downstream conversion process, the residual quantities may be aligned to complete full, case-size box quantities. By using flexible formulas, process ERP systems can demonstrate how the residual materials can be reworked from waste back into materials used in production.
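
The yield arithmetic described above can be illustrated as follows; all figures (theoretical yield, residual rate, case size) are hypothetical, and a real system would drive them from the flexible formula itself.

```python
# Sketch of batch yield, case-pack rounding, and residual rework (assumed data):
# a batch produces a theoretical yield, part of it becomes residual material,
# the packable quantity is rounded down to full cases, and the residual is
# booked for rework into a later batch rather than written off as scrap.
def plan_batch(theoretical_yield_kg, residual_rate, case_size_kg):
    residual_kg = theoretical_yield_kg * residual_rate
    packable_kg = theoretical_yield_kg - residual_kg
    full_cases = int(packable_kg // case_size_kg)
    leftover_kg = packable_kg - full_cases * case_size_kg
    return {
        "full_cases": full_cases,
        "rework_kg": round(residual_kg + leftover_kg, 2),  # returned to production
    }

print(plan_batch(theoretical_yield_kg=1000.0, residual_rate=0.008, case_size_kg=12.0))
# -> {'full_cases': 82, 'rework_kg': 16.0}
```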

5. Supply chain management (SCM)
Collaborative forecasting and planning are essential features of a process industry ERP system, especially for the automotive and consumer products industries. Some of the most important functionalities include

* visibility over inventory across the global supply chain
* enterprise-wide planning in the areas of sales and marketing, procurement, and production
* the ability to integrate planning for what-if scenarios
* the ability to benchmark quality and vendor performance issues
* detailed reporting that highlights areas where parameters may be out of scope
* real-time available-to-promise (ATP) information for customer service

6. Process industry costing
The financial system for the process industry must also be able to provide for multiple-level formulas on the same production work order, and for outside processing at subcontract facilities. Given the nature of process industry products, most plants must operate on a continuous basis, which drives maintenance costs up. As a result, maintenance costs usually comprise 30 percent of a process industry plant's operating budget. Thus, an ERP system must integrate with some type of best-of-breed system to meet the requirements of the operation, and with some form of asset management system, which takes into account predictive and preventative maintenance.

ERP System Constraints in the Process Industry

For lack of an available solution designed for their needs, some process manufacturers have attempted to implement an ERP system for discrete manufacturing. As there are several fundamental differences between the operations and practices of process and discrete manufacturing, opting for such a stop-gap measure is not always effective. Process manufacturers have no doubt noted the constraints that are placed on their operations as a result of using a system that was not designed for their needs.

The nature of the process manufacturing business is such that it is difficult to manage inventories and profits. Process manufacturers carry large quantities of finished product in transit and of raw material inventory. The products often have low yields with substantial scrap (fine chemicals, pharmaceuticals, or plastics).

Business dynamics are putting demands on ERP systems to help with

* maintaining a lead over the competition
* simplifying product lines
* responding to shorter product life cycles
* providing mass customization (car options, computer system accessories, etc.)
* complying with regulations

In an attempt to meet these demands, many manufacturers have looked at ways to improve supply chain optimization by re-examining manufacturing processes, relocating closer to markets, and looking at cheaper energy, transportation, and labor. The businesses' needs are such that an ERP system must be powerful enough and diverse enough in functionality to do more than simple process manufacturing.
With ingenuity, many of the raw material manufacturers have turned to vertical market integration, moving from pure process manufacturing to mixed mode. Their factories now produce raw product for industry and sell finished goods by the item (counting). An example is toothpaste, where the finished good is sold by the pallet, case, or individual package. The ERP system must allow manufacturing processes to batch products in order to achieve product consistency (examples include textiles, with dye lots and finishing; bakeries, with oven scheduling; and aerospace, with electroplating).

That some factors are out of the control of process manufacturing vendors is exemplified by the retail industry. In this industry, the vendor has a many-stop supply chain, and plays a role almost like that of the caboose at the end of a long train.

For example, chain stores track sales at the cash register, and use that information to replenish inventory from branch warehouses. The warehouses get their product from distributors. In the case of multilevel distribution networks, this explosion process percolates upward through the various levels from the retail store to regional warehouses (master warehouse, factory warehouse, etc.). The demand is input to the master production schedule at the level of the manufacturer. The process is not always real-time, meaning that a lot of product is out in the supply chain. This process of upward percolation is most common in the pharmaceutical and retail grocery industries. Since everyone in the supply chain strives to minimize inventory and turn it frequently, any ERP system has to cope with these constraints.

As a side note, some manufacturers are trying to use real-time reporting to determine product consumption and demand. The information is more accurate and allows reduced inventory in the field, increased inventory turns, production tailored to market preferences, and better cash management.

Mixed-mode ERP systems are used by the processing industries for several reasons. First, there is no need to duplicate the data. For example, mature discrete ERP systems have well-optimized modules addressing finance, production planning, inventory management, sales, shipping, etc. The benefits of moving to a mixed-mode ERP product such as Syspro stem from the use of a common module to support production, sales, inventory, supply chain, finance, and analytics. These sophisticated discrete modules, adapted to accept process data, can go a long way toward helping the manufacturer reduce inventory, improve cash flow, and improve manufacturing yields.

The optimization of manufacturing and distribution processes for larger enterprises often involves business intelligence (BI) and business performance management (BPM) functions. These new ERP functions are typical of large manufacturers' systems, and are generally not affordable to SMBs.

Therefore, the SMB-oriented ERP system for process manufacturing needs to have extra capabilities that provide data for BI functionalities. The dynamics are such that this data is often industry-oriented (food versus chemical). ERP systems need to provide dashboards providing what-if scenarios to allow the manufacturer to improve competitiveness, while avoiding the cost of a full BI/BPM operational group.

Finally, a pure process ERP product has quantitative variables with large variations in values, leading to statistically large standard deviations. Statistical analysis for process optimization requires small standard deviations in order to make useful manufacturing recommendations. (Large standard deviations are indicative of large inventories in the pipeline, or variations in raw material quality.) Constraints on the quality of input data are essential to achieve any business improvements.

Following is a summary of constraint requirements of the ERP systems for process industries:

* have sophisticated data conversion algorithms (liters, gallons, and weights of mixes), allowing packaging size variations to be accurately reflected in the calculation of production batch sizes
* be real-time in execution on the production floor
* quickly create a new production schedule from new orders, allowing for extra production runs of the same product
* be responsive to changes in raw product concentrations
* provide a dashboard that gives management real-time views of pertinent business processes
* allow for varying manufacturing methods, such as continuous, make-to-order, make-to-forecast, and engineer-to-order
* function equally well in discrete and process industry modes, with reliable software bridges between the two

Nine Ways to Use ERP to Make the Manufacturing Supply Chain Lean

Lean in a supply chain context is about a holistic view of procurement, manufacturing, distribution, and sales order processing. This means that some level of enterprise technology is necessary to view the organization in an integrated context instead of as functional islands. However, before technology can facilitate the lean supply chain, manufacturing executives need to start thinking in lean supply chain terms. We will be reviewing those terms and sharing the key concepts that are the foundation for the lean effort. In short, we will discuss:

* Four tips to help you bring lean supply chain improvements to your manufacturing operation.
* Five technology tools that help automate these lean supply chain practices.

We'll use a few practical examples along the way to illustrate these concepts, but our main goal will be to define the specific things manufacturing executives can do to make their supply chain lean.

Tip One: Resolve Conflict Between Manufacturing Efficiency and Customer Service

One company that I have been involved with from a lean supply chain perspective is a manufacturer of paints and coatings. After implementing their new enterprise application, they wanted to use the new-found visibility of their operations to implement a lean supply chain program. They found that before you can lean an organization, you have to have a good idea of what your current operations and processes really are. Then, once you know what you are doing, you can decide on process changes and then measure to what extent those changes have reduced non-value-added work.

What became immediately apparent is that like most manufacturers, this company faced the same conflict between the need to be efficient in production and the need to be responsive to customers. The manufacturing department tended to schedule for maximum efficiency by producing very large batches. This enabled efficient purchase of raw materials, and maximum return on machine set-up time, manufacturing personnel, and other costs. Large batches are simply a good way to minimize the cost per produced unit. Low cost-per-unit is attractive, but it is in conflict with the goals of the sales organization. While manufacturing is rewarded for efficiency, the sales department is rewarded for serving the customer, which in turn leads to increased revenue and commissions. If manufacturing commits production capacity to large runs that are not immediately tied to customer demand, it might be difficult to meet the needs of customers as those needs change and fluctuate during the year. An item that is requested might not be in stock and may not be scheduled for production at a time when immediate capacity is committed due to the high-volume production schedule. Moreover, these large production runs mean that large amounts of capital are tied up in finished product inventory long before any revenue can be realized.

In the case of food and beverage and some other process industries, these large production runs can also result in spoilage as the effective life of raw materials and finished goods is spent sitting on the shelf.

The paint manufacturer did the obvious thing—decreased its batch sizes. Going forward, manufacturing would not be making unilateral decisions about batch sizes and production schedules, circumventing its natural tendency to focus on manufacturing efficiency. Instead, it would now be obliged to produce only enough to cover a certain period of time based on the sales projection. That means that every product will now be produced more frequently and in smaller batches. To encourage this behavior, other metrics and key performance indicators (KPIs) can be introduced that are more holistic and based on customer service levels and inventory turns, rather than just production output.

There are any number of formal disciplines designed to tie production in with sales forecasts. Sales and operations planning is one such method. By tying manufacturing schedules to sales projections that typically look a short distance into the future, you will be manufacturing what customers are actually asking for, improving customer service, and allowing greater responsiveness. Your inventory levels will decrease as your inventory turns increase. The higher the speed through the supply chain, the faster the inventory turns, and the less capital will be tied up in inventory at any given time. At the same time, the faster raw materials move through the supply chain, the less obsolescence you have and the fewer expired materials you have.
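
A back-of-the-envelope calculation shows why this matters: if average inventory is roughly annual cost of goods sold divided by inventory turns, then doubling turns halves the capital tied up at any moment. The figures below are purely illustrative.

```python
# Rough illustration of the turns-versus-capital relationship (hypothetical figures):
# average inventory investment is approximately annual COGS / inventory turns.
def capital_tied_up(annual_cogs, inventory_turns):
    return annual_cogs / inventory_turns

annual_cogs = 12_000_000  # USD, assumed for illustration
for turns in (4, 8, 12):
    print(f"{turns:>2} turns/year -> ~${capital_tied_up(annual_cogs, turns):,.0f} in inventory")
# 4 turns -> ~$3,000,000; 8 turns -> ~$1,500,000; 12 turns -> ~$1,000,000
```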

Tip Two: Extend Systems to Suppliers

Once the basic problem of aligning manufacturing schedules with demand is taken care of, the greatest bottleneck to improved supply chain efficiency is often the disconnect between internal scheduling processes and those of external suppliers. Companies that are vendors to major original equipment manufacturers (OEMs) know that large manufacturers are working to eliminate this bottleneck, and are often working with suppliers on a proactive basis to help them become more responsive.

Technology can play a vital role in eliminating this constraint. In the case of our paint and coatings manufacturer, one of their main bottlenecks was packaging. They outsourced printing on the cans their product was shipped in, which meant that the supplier could not finalize its own can production schedule until it knew exactly which product numbers would be filled. Their packaging actually took longer to produce than the manufacture of the product itself. Even though the cost of the packaging is low in comparison to the actual product to be filled, the scheduling of the packaging supply is one of the most critical and difficult parts of production planning.

The solution was to set up a supplier portal, so that the packaging vendor and other suppliers could look into the production plan and prepare their own schedules accordingly. This eliminated much of the manual and administrative work involved in interfacing with a supply chain partner, allowing the interaction to happen in real time without manual intervention or administrative delays. Portals of this nature can also provide a longer view of anticipated production so that vendors can manage their own inventories and plan their own capacity according to anticipated demand.



Figure 1. A supplier portal eliminates administrative waste and integrates the supply chain in real time.
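As a rough illustration of the pattern (and only that), the sketch below exposes a read-only production-plan feed over HTTP that suppliers could poll; the endpoint, data, and field names are hypothetical, not a description of any particular vendor's portal.

```python
# Minimal sketch of a read-only supplier portal feed (hypothetical data and
# endpoint, not the actual portal described above). Suppliers poll the planned
# production orders that affect them and schedule their own work accordingly.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real deployment this would come from the ERP production plan.
PRODUCTION_PLAN = [
    {"order": "PO-1001", "item": "WHITE-GLOSS-5L", "qty": 4000, "start": "2009-11-02"},
    {"order": "PO-1002", "item": "RED-MATTE-1L", "qty": 1500, "start": "2009-11-04"},
]

@app.route("/suppliers/<supplier_id>/plan")
def supplier_plan(supplier_id):
    # Filtering by supplier is omitted for brevity; return the shared plan.
    return jsonify({"supplier": supplier_id, "planned_orders": PRODUCTION_PLAN})

if __name__ == "__main__":
    app.run(port=8080)
```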

Tip Three: Run Parallel MRP Processes

To clarify, this tip is not so much about running parallel manufacturing resource planning (MRP) systems as it is about delaying the commitment to manufacture until the point where demand is visible, known, or certain.

Even as companies try to focus on the disciplines normally associated with the idea of a lean supply chain, this is another fundamental paradigm shift that must take place within their organizations, and it is often overlooked.


For instance, many companies operate with one single MRP process. Consider a company that manufactures in a make-to-stock (MTS) mode, whose executives feel that this single MTS enterprise system is adequate for their needs. That attitude is fine if the manufacturer has stable, predictable demand for all of its products. But in reality, few manufacturers have the luxury of flat demand. More often the Pareto rule applies: roughly 20 percent of products have a stable demand and can be manufactured efficiently in large quantities, while the other 80 percent of a manufacturer's part numbers are ordered less frequently, and therefore need to be treated differently in the company's processes, systems, and scheduling. This is why virtually any MTS manufacturer should run in multiple manufacturing modes. Most MTS manufacturers would gain from a parallel make-to-order (MTO) process. This avoids stockpiling a large number of items that are more effectively handled in MTO mode, freeing up both capital and production capacity for other products, all without sacrificing responsiveness or customer service. By continually analyzing demand patterns and inventory turns, the point of postponement can be shifted over time to achieve the optimal balance between efficiency and responsiveness.

A modern, agile enterprise application will include all of the necessary tools to handle these multiple modes. Apart from ensuring that they have the proper enabling technology, manufacturers will need to carefully analyze the demand patterns for their finished goods and divide them into MTS and MTO.
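As a simple sketch of that analysis (the thresholds and demand figures below are hypothetical), items can be segmented by how regularly they are ordered and how variable their demand is, with steady runners flagged as MTS candidates and sporadic items flagged for MTO:

```python
# Hypothetical sketch: split finished goods into MTS and MTO candidates based
# on demand frequency and variability (coefficient of variation).
from statistics import mean, pstdev

# Monthly demand history per item (illustrative numbers only).
demand_history = {
    "STD-WHITE-5L": [900, 950, 880, 920, 910, 940],   # steady runner
    "CUSTOM-TEAL-1L": [0, 40, 0, 0, 120, 0],          # sporadic
    "RED-MATTE-1L": [300, 280, 310, 290, 305, 295],
}

def classify(history, cv_threshold=0.5, hit_rate_threshold=0.8):
    hits = sum(1 for d in history if d > 0) / len(history)  # how often it sells
    avg = mean(history)
    cv = pstdev(history) / avg if avg else float("inf")     # demand variability
    return "MTS" if hits >= hit_rate_threshold and cv <= cv_threshold else "MTO"

for item, history in demand_history.items():
    print(item, "->", classify(history))
```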



Figure 2. Demand Planning—Minimize the forecast error and improve customer service without sacrificing inventory turns.

Tip Four: Master the Demand Forecasting Process

Too many companies fail to give demand forecasting the attention it deserves, and this is often the undoing of even the most aggressive lean supply chain project. The more accurate a demand forecast is, the better positioned you are to supply the market with what it actually needs. Conversely, basing a lean supply chain effort on an inaccurate demand forecast is like building a house on a foundation of sand.

If you can increase demand forecast accuracy, you can decrease your inventory and increase your level of customer service at the same time. Unfortunately, most companies divide responsibility for the demand forecast among multiple departments. Sometimes the process is owned by the sales department, which tends to be too optimistic, both because optimism is in its nature and because it wants to avoid disappointing a customer by being out of stock. At the end of the day, no one is specifically responsible for forecast accuracy, perhaps in part because the company lacks the proper tools to develop an accurate forecast. It is not surprising that, as a result, this crucial role tends to “fall between chairs” in many enterprises. It is imperative that a manufacturing enterprise assign demand forecasting to an independent party within the company, one with a more holistic point of view than sales, manufacturing, or any other specific department.
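One way to give that independent party something measurable to own is an agreed-upon error metric. The sketch below uses mean absolute percentage error (MAPE) on made-up numbers to show how a chronically optimistic, sales-owned forecast can be compared against a more neutral one.

```python
# Illustrative sketch: measure forecast accuracy so that someone can actually
# be held responsible for it. MAPE (mean absolute percentage error) is one
# common, easy-to-explain metric; all figures here are hypothetical.

def mape(actuals, forecasts):
    """Mean absolute percentage error over periods with non-zero actuals."""
    errors = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a != 0]
    return sum(errors) / len(errors)

actual_demand    = [1000, 1100, 950, 1200]
sales_forecast   = [1300, 1250, 1200, 1400]  # chronically optimistic
neutral_forecast = [1050, 1080, 1000, 1150]  # owned by an independent planner

print(f"Sales-owned forecast MAPE: {mape(actual_demand, sales_forecast):.1%}")
print(f"Independent forecast MAPE: {mape(actual_demand, neutral_forecast):.1%}")
```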

Hosting Horrors!

The development team has spent three months working night and day to get a beta version of your product ready for testing. The COO wants to know when the customer will be able to see the product. The venture capitalists want to see results for the $3 million they've invested. You just want to get everyone off your back for 10 minutes. This is not the time to hastily pick a Web Hosting Company.

As an ASP, your application is not a standard off-the-shelf product. Months, and possibly years, have been spent perfecting how your application operates. ASPs are creating applications that a few years ago would only be found in a LAN environment. These applications have components similar to those of their LAN counterparts, and operate in much the same fashion. It's this uniqueness that can cause problems at the hosting level.
As simple as that statement is, you might be surprised by what actually goes into your application. Think about your needs: server capacity, storage space, multiple NICs and TCP/IP addresses, firewalls, and so on. Think about your application at the process level and how the different components interact; look at how the different servers work together. Then draw the whole server farm out and document everything. This map and specification sheet will be your guide to the hosting company and a reference for everyone when things go wrong.

The map will also come in handy at the office. I posted a copy on the wall outside my office; it's amazing who refers to it. I knew development would use this map, but a few days ago I caught our Director of Marketing using the diagram to explain our product to someone.
As critical as it might seem to just get the site up and running, it is best to take your time and research the companies you would like to use. The best place to find information about Hosting Companies is through their web sites. For obvious reasons, Web Hosting Companies will have plenty of information about their services on tap. The first thing to keep in mind is that this is marketing material; it should be used as a starting point, not as the final basis for your decision.

When you look at the different companies you'll notice similarities. Everyone has secured facilities; they all have backup power sources, and enough bandwidth to play the ultimate game of Half-life with 10,000 of your closest friends. Not to say these are not important issues, but they are so common they should be part of the standard configuration of any hosting facility.

Most hosting companies will have different levels of service depending on your needs. These can range from just rack space and a connection out to the Internet, all the way to arrangements where the Hosting Company has full control of the boxes and gives you access on a limited basis.

Consider whether you need to get into the box. Does your software need special access to the operating system? If so, then you will want root control so that you can make the security settings yourself.

Once a box is built, do you need to change anything? If not, then consider using a lower level of service. This is more of a cost savings than anything else; why pay for more service than you need? Check to see if they have specialized products that fit your needs (remember the diagram?). Usually, if a hosting company has a special service, they are not going to hide it.

As soon as you have a short list of Hosting Companies, call them and talk to a salesperson. Do they know the company's products? Have they worked with an ASP before? What is their background? Do they work with a project manager? Don't be afraid to ask questions! This person is there for your benefit, not just the Hosting Company's.

If the Hosting Company has a facility in your area, ask for a tour. During the tour, take a good look at the place. Is it well lit and clean? Are the cables in the racks run neatly? Look for consistency in the way the racks are set up. The condition of the facility will tell you a lot about the staff maintaining it. Talk to the system engineers. What are their backgrounds? Do they know your operating system? Are the engineers trained on the latest installation and maintenance procedures?
No, I'm not talking biblical prophecy here. What I mean is: do you go with the big, established Hosting Company (the Goliath), or the smaller newcomer to the industry (the David)? Being a good Web Hosting Company has nothing to do with size, which is not to say that the established companies cannot meet your needs.

The best of the Hosting Companies will understand what your needs are, and can provide the resources you need. Remember the ASP industry is only 4 years old. It is still a young industry. We are all trying to figure out how to do things. Don't be afraid to look at what the newer Hosting Companies have to offer. It never hurts to look. You might find exactly what you need from a David.
Service level agreements (SLAs) are a way of life for the ASP. We make our customers sign them, and we have to sign the Hosting Company's SLA. This is so everyone knows where the boundaries are. But have you thought about how the Hosting Company's SLA could affect your own SLA?

I'll give you an example.

In your SLA, it says that files can be restored from backup within 4 hours. A major customer calls; the file they downloaded off your system 3 days ago is corrupt. Your customer has to have this document or it will cost them thousands of dollars. You say "sure, not a problem," and call the Hosting Company. They say, "Yes Mr. Smith, we will restore that file and we'll let you know as soon as it's done." Time passes and no file arrives; one of your best customers is now threatening to sue. The Hosting Company points out that the SLA you signed states that they will restore a file within 24 hours.

This is just one reason why you should make sure that the Hosting Company's SLA doesn't contradict your SLA. Make sure the Hosting Company can provide the services that you are promising your customers. If they can't, then either look for a Hosting Company that can or think about taking that service out of your SLA.
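Even a crude checklist helps here. The toy sketch below (hypothetical figures) simply compares each commitment in your SLA against the corresponding commitment in the Hosting Company's SLA and flags contradictions such as the 4-hour versus 24-hour restore window above.

```python
# Toy sanity check (hypothetical figures): make sure nothing you promise your
# customers is more aggressive than what your Hosting Company promises you.

your_sla = {"file_restore_hours": 4, "uptime_pct": 99.9}
host_sla = {"file_restore_hours": 24, "uptime_pct": 99.5}

for metric, promised in your_sla.items():
    backed_by = host_sla.get(metric)
    if metric.endswith("_hours"):
        ok = backed_by is not None and backed_by <= promised  # lower is better
    else:
        ok = backed_by is not None and backed_by >= promised  # higher is better
    status = "OK" if ok else "CONTRADICTION"
    print(f"{metric}: you promise {promised}, host promises {backed_by} -> {status}")
```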

Saturday, October 3, 2009

Microsoft .NET Enablement: Analysis and Cautions

As discussed earlier in this series, to develop and deploy a Web service-connected information technology (IT) architecture, several elements are required: smart clients, servers to host Web services, development tools to create them, and applications to use them. The Microsoft .NET Framework includes these elements, as well as a worldwide network of more than 35,000 Microsoft Certified Partner organizations to provide any help users might need. For a definition of how the Microsoft .NET environment addresses the situation, see Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement.

Part Three of the series Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement.

For a general discussion of the evolution of system architecture, see Architecture Evolution: From Mainframes to Service-oriented Architecture.

Only a few innovative or brave (or both) Microsoft-centric vendors have embarked on a gut-wrenching (but potentially rewarding) effort to deliver brand new software written in managed .NET Framework-based code, where many of the basic system tasks are removed from the code and "managed" by the .NET Framework. This functionality, which has been completely rewritten, or newly created using only the .NET Framework, can then be used and accessed through Web services, as with the examples of .NET-enabled ("wrappered") counterpart software cited in Examples of Microsoft .NET Enablement.

However, if history helps us predict the future, it is awfully difficult to execute this strategy of transforming software frameworks effectively, and only the most resourceful or steadfast vendors are tipped as winners in the long run. Thus, one should be aware of how technology might develop in the future while aligning business and IT functions. Across the application life cycle, the high cost of development, support, and enhancements, in terms of money, time, and quality, limits the ability of installed legacy software to meet many demands of the business. Other possible stumbling blocks are that legacy functionality may not be accessible for modifications, and that such environments might require an additional layer of code to be developed and maintained for the wrapper (whereby the programmer must continue to manage system tasks). Also, this technology leap might not be positioned well for future Microsoft technology advances, such as Windows Vista.

Once other proprietary technologies are introduced into the research and development (R&D) equation, any vendor has to deal with translation, interface, and performance issues, not to mention the pain of migrating or keeping existing customers up to date, or maintaining multiple product versions. In fact, some laggard vendors even see the service-oriented architecture (SOA) and Web services bandwagon as an opportunity to portray legacy systems as "a treasure trove" of software assets. Although this might hold true for some applications where there is no business justification to "reinvent the wheel" (in other words, to duplicate what already exists, by developing a new payroll or general ledger system, for instance), users and vendors should make a rigorous effort to sort through this treasure trove and separate the "diamonds" from the "rhinestones." They will have to conduct a thorough discovery process to find out what functional assets they really have, and then make some tough decisions about what to keep, what to modernize, and what to throw away, since the code that is kept will form the foundation of a functional repository of services that will be used for years to come.

Many vendors, especially those with some longevity in the market (and even mainframe roots), like to create the belief that under SOA, no old code is bad old code. But the truth can be quite different. Some legacy systems have been around for forty years or more, and even though they may still be working, and doing something useful, not all of them are worth keeping as is. In many circumstances, companies can get away with wrappering as a temporary measure. Eventually though, and contrary to what many vendors say, both vendors and user enterprises will be forced to modernize and transform much of their legacy code. For more information, see Rewrite or Wrap-Around Old Software?.

To that end, in fact, Epicor Vantage is an example of an application that is essentially positioned between the .NET-enabled approach and a rewrite in pure .NET-managed code (which is the next evolutionary step, as will be explained shortly). That is, around 60 percent of Vantage is in .NET-managed code (in other words, a C#-based smart client, extensibility tools, customization, and so on), and all business logic is exposed as Web services (not wrappered, but rather Web services generated from Progress OpenEdge). In the rewrite effort, Epicor recreated much of the business logic in a far more componentized and granular way, in order to support Web service calls. For instance, the capable-to-promise (CTP) check that Vantage users require cannot operate properly without the .NET Framework.

Certainly, the presence of Progress means that Vantage is not completely in .NET-managed code. However, with Vantage Epicor has not planned to be 100 percent .NET, but rather 100 percent SOA. The vendor has simply used .NET for the majority of the solution where it made sense, such as for client-side dynamic link library (DLL) management, and provided a standardized Epicor Portal platform based on the "latest and greatest" Microsoft SharePoint technologies (see Epicor To Give All Its Applications More Than A Pretty Facelift). When the vendor did not use .NET, it was to ensure choice and flexibility for customers on the operating system (OS) and database side. Namely, pure .NET throughout means only a Microsoft stack, whereas Epicor can support Microsoft and Linux/Unix OS, and Microsoft SQL Server, Progress, and Oracle databases (Oracle is not currently supported, but the intent is to support it in the future). This decision was important for supporting larger customers who rightly or wrongly maintain a perception that the Microsoft platforms cannot scale.

SYSPRO's design has also been along similar lines: the SYSPRO Reporting Services (SRS), Analytics, and web applications mentioned in Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement were all written using .NET-managed code, whereas SYSPRO CyberStore's .NET-enabled capability is also featured in the SYSPRO BarCode and SYSPRO Warehousing Management System (WMS) products, all of which are fully integrated with the core enterprise resource planning (ERP) system via the .NET Framework.

In contrast, .NET-managed software products are built entirely of homogeneous .NET "managed code" components, meaning without any wrappers. In other words, managed .NET code is code whose execution is managed by a .NET virtual machine, such as the .NET Framework Common Language Runtime (CLR). "Managed" refers to a method of exchanging information between the program and the run-time environment, or to a "contract of cooperation" between natively executing code and the run time. This contract specifies that at any point of execution, the run time may stop an executing central processing unit (CPU) and retrieve information specific to the current CPU instruction address. Information that must be query-able generally pertains to run-time state, such as register or stack memory contents.

The necessary information is thereby encoded in Microsoft Intermediate Language (MSIL, now standardized as the Common Intermediate Language [CIL]) and associated metadata, the symbolic information that describes all of the entry points and the constructs exposed in the MSIL (such as methods and properties) and their characteristics. The Common Language Infrastructure (CLI) standard (of which the CLR is the primary Microsoft commercial implementation) describes how the information is to be encoded, and programming languages that target the run time emit the correct encoding. All a developer has to know is that any of the languages that target the run time produce managed code emitted as portable executable (PE) files that contain MSIL and metadata.

As emphasized earlier on, there are many such languages to choose from, since there are more than thirty languages provided by third parties, everything from COBOL to Caml, in addition to Visual C#, J#, VB.NET, JScript, and C++ from Microsoft. The CLR includes the Common Language Specification (CLS), which sets the rules and regulations for language syntax and semantics, as well as the Common Type System (CTS), which defines the data types that can be used. Because all programs use the common services in the CLR, no matter which language they were written in, such applications are said to use "managed code." In a Microsoft Windows environment, all other code has come to be known as "unmanaged code," whereas in non-Windows and mixed environments, "managed code" is sometimes used more generally to refer to any interpreted programming language.

Before the managed code is run, the MSIL is compiled into native executable (machine) code. Furthermore, since this compilation happens through the managed execution environment (or more correctly, by a just-in-time [JIT] compiler that knows how to target the managed execution environment), the managed execution environment can make guarantees about what the code is going to do. It can insert traps and appropriate garbage collection hooks, exception handling, type safety, array bounds and index checking, and so forth. For example, such a compiler makes sure to lay out stack frames and everything "just right," so that the garbage collector can run in the background on a separate thread, constantly walking the active call stack, finding all the roots, and chasing down all the live objects. In addition, because the MSIL has a notion of type safety, the execution engine will maintain the guarantee of type safety, eliminating a whole class of programming mistakes that often lead to security holes.

This is traditionally referred to as JIT compilation, although unlike most traditional JIT compilers, the file that holds the pseudo machine code that the virtual machine compiles into native machine code can also contain pre-compiled binaries for different native machines (such as x86 and PowerPC), which is similar in concept to the Apple Universal binary format. Conversely, unmanaged executable files are basically a binary image of x86 code loaded into memory; the program counter is pointed at the entry point, and that is the last the OS knows about it. There are protections in place around memory management, port I/O, and so forth, but the system does not actually know what the application is doing. Therefore, it cannot make any guarantees about what happens when the application runs.

This means that .NET-managed software should benefit from the many performance and security advantages of .NET-managed code, since the CLR handles many of the basic tasks that were previously managed by a programmer in the application code, including security checks and memory management. .NET-managed products will also likely run more smoothly as "native code" on Microsoft Windows Vista and future Microsoft OS and technology advances, providing another important advantage. Finally, .NET developers who have experience with managed code will confirm that this programming paradigm allows them to develop and extend applications in record time and with significant improvement in quality, owing to the ability to create new, "leaner" software with significantly fewer lines of code that runs natively on the .NET Framework.

Examples of Microsoft .NET Enablement

The Microsoft .NET environment includes what a business might need to develop and deploy a Web service-connected information technology (IT) architecture: smart clients, servers to host Web services, development tools to create them, applications to use them, and a worldwide network of more than 35,000 Microsoft Certified Partner organizations to provide any help users might need.

Part Two of the series Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement.

Most vendors have naturally chosen to evolve their existing application framework to meet the market needs detailed in Subtle (or Not-so-subtle) Nuances of Microsoft .NET Enablement. For a general discussion of the evolution of system architecture, see Architecture Evolution: Service-oriented Architecture versus Web Services.

When a compelling new technology does appear, it is quite common in the industry for an enterprise application provider to surround its old enterprise resource planning (ERP) or accounting core software with a "wrapper" of newer technology. The purpose is to effectively mask the old technology, giving it the latest graphical look, or providing an easier means of accessing the core business logic and data from other, more modern systems and devices, or the Internet. Many ERP and accounting back-office systems on the market today were originally written in, and still contain cores based on, non-mainstream or even antiquated technologies. Strategies employed to wrap older products include putting contemporary Windows graphical user interfaces (GUIs) (often referred to as "screen scrapers") or web browser-based user interfaces (UIs) on them. Lately, strategies have included providing new Web services layers that rejuvenate aged products by accessing the old business logic components and databases.

Evolving is a slower process in which incremental changes are made to the existing architecture so that it eventually meets these demands. There are some good examples of .NET-enabled legacy software systems to which wrappers have been added to allow legacy functionality to be used and extended through Web services on the .NET Framework. In other words, at this more advanced level of .NET readiness, the legacy software system has a wrapper added, a communication component created by an additional layer of code in the product. The wrapper is written in one of the .NET Framework languages, and by adding it, the legacy system's functionality can be used through Web services. Other advantages of this approach are that such systems run on the accepted current market definition of the .NET Framework, and that it allows fairly rapid enablement of legacy functionality.

One application that fits into the wrappering side of things is the Epicor Enterprise client/server product suite, whose business logic is exposed via .NET Web services. Most Epicor products generate Web services from business logic, or have business logic that simply is Web services (since the vendor has some heritage manufacturing products that will remain in traditional client/server mode). It is also interesting to note that there are no Web services in many of the Microsoft Dynamics ERP products as of yet. This is primarily because in some products, nothing is coded in (or requires) .NET for now, at least not in the core product (see Microsoft Keeps on Rounding up Its Business Solutions). Certainly, developers have an application program interface (API) to code in .NET around them or to extend them, but the core product is still largely Common Object Model (COM)-based. Even Epicor Enterprise might be in "better shape," since Epicor provides .NET-enablement via .NET extensible markup language (XML) Web service code wrappers.

A great example of a more advanced approach is that of SYSPRO. It goes beyond using wrappers, and aims to componentize the product so that its functionality can be used on any device or with any modern development language (including .NET languages). SYSPRO is a well-known developer of enterprise software for mid-market manufacturers and distributors (with about 12,000 licensed companies in more than 60 countries worldwide), and was one of the first software vendors to embrace Microsoft .NET technology (see SYSPRO—Awaiting Positive IMPACT From Its Brand Unification). SYSPRO spent years developing its .NET Framework-based solution during the same period in which Microsoft was working to launch the .NET Framework technology commercially. Many of the building blocks of the SYSPRO solution were built initially on beta releases of the commercially available Microsoft software. The company saw the .NET Framework as a way to add functionality and extend control along the entire supply chain, without the need for extensive programming or alterations to the core system.

SYSPRO introduced SYSPRO e.net solutions to expose the extensive SYSPRO functionality as business objects that can be used on any device or with any modern development language. This "componentization" was written from the ground up to work seamlessly with XML and with .NET or COM environments. The SYSPRO business objects, or components, are "building blocks" that allow customers and developers to build Web services for customized solutions, or for seamless integration into third-party products, relatively quickly and easily. The business objects ensure that business logic, SYSPRO security, and data integrity are retained.

SYSPRO e.net solutions also form the foundation of the rewritten SYSPRO web applications that were developed using .NET technology and the SYSPRO business objects. The SYSPRO e.net solutions Web services, which form part of the web applications, deliver the core SYSPRO functionality to almost any client device across a variety of protocols in an integrated and cohesive manner. This enables applications and services that should provide manufacturers and distributors with new levels of functionality and flexibility.

A good example of the use of SYSPRO e.net solutions is the SYSPRO CyberStore offering, an e-commerce application that extends the concept of service-oriented architecture (SOA) to business-to-business (B2B) and business-to-consumer (B2C) trading. SYSPRO CyberStore offers online shopping 24x7 with near real-time inventory information and real-time pricing, and places the order directly into the SYSPRO ERP system using SYSPRO business objects and XML standards. For example, as a user navigates through the e-commerce site, different SYSPRO e.net solutions objects and services are invoked to retrieve the relevant information. As a product is selected by a buyer, the inventory look-up business object is invoked to perform an online inventory check and fetch the latest image of the product from the back-end SYSPRO ERP system. The information is returned and rendered to the user, with the result being live inventory information provided to a potential e-commerce buyer. In addition, if the refresh button is selected a split second after a sales order is entered by the accounts department in the back-end ERP system, the revised inventory information will be displayed to the user on the e-commerce site. If the user purchases the item, the information relevant to the buyer and the payment method is collected in the front-end e-commerce system, passed to a business object using the XML standard, and automatically processed in real time into the back-end SYSPRO ERP system.
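Conceptually, the storefront-to-business-object interaction looks something like the sketch below. To be clear, the endpoint URL, operation name, and XML layout are hypothetical stand-ins rather than SYSPRO's actual e.net solutions interface; the point is only the request/reply pattern of posting an XML query to a business object and rendering the XML reply to the shopper.

```python
# Conceptual sketch only: hypothetical endpoint and XML schema, not SYSPRO's
# actual interface. A storefront posts an XML inventory query to a business
# object and renders the XML reply to the shopper.
import urllib.request
import xml.etree.ElementTree as ET

def inventory_lookup(stock_code):
    request_xml = f"<Query><StockCode>{stock_code}</StockCode></Query>"
    req = urllib.request.Request(
        "http://erp.example.com/businessobjects/InventoryLookup",  # hypothetical URL
        data=request_xml.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = ET.fromstring(resp.read())
    return {
        "stock_code": stock_code,
        "on_hand": reply.findtext("QtyOnHand"),
        "price": reply.findtext("Price"),
    }

# print(inventory_lookup("A100"))  # would call the hypothetical endpoint above
```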

SYSPRO e.net solutions provide a fairly cost-effective way for SYSPRO customers to integrate other best-of-breed applications, maximize B2B e-commerce trading, and leverage wireless connectivity, without compromising the business rules and security inherent in SYSPRO software. The XML standard and collaborative commerce tools like the Microsoft BizTalk Server and SYSPRO e.net solutions Document Flow Manager (DFM) enable systems to be more extensible and to collaborate with any other disparate or legacy system. As a result of the effective use of the .NET Framework, objects and services, and XML, independent systems can be set up to collaborate in real time despite their disparities.

In order to make the technology viable in the mid-market, SYSPRO embedded the aforementioned collaborative commerce engine into the core SYSPRO ERP software. The DFM automatically consumes and transmits XML transactions in real time by continually checking predetermined folders or e-mail addresses on a Microsoft Exchange Server for XML transactions. The XML transaction files are either e-mailed or transmitted via file transfer protocol (FTP). As the SYSPRO predefined XML transaction is identified by the DFM, it is automatically consumed by the module, which will in turn invoke a business object (business logic) to process the received transaction. The DFM module can also be configured to transmit the reply from the business object to an e-mail address or to another business object for further processing.

In environments where it may not be feasible to transact in real time directly through a Web service, the DFM module can be used to process the transaction asynchronously. The payment authorization can be processed in the front-end e-commerce system and the relevant information placed in an XML file, which is then transmitted back to the DFM and consumed using the same business object as if it were being processed through the Web service. In a situation where the back-end SYSPRO ERP system is offline for some reason, transactions are queued for processing by the DFM, and the module can initiate replies, transmit XML transactions back to the e-commerce system, and initiate e-mails or other proactive processes to improve efficiency and the customer experience.
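The folder-polling pattern behind this kind of asynchronous processing can be pictured with the small sketch below; it is a generic analogy rather than SYSPRO's actual DFM code, and the folder names and processing stub are invented for illustration.

```python
# Generic analogy of the folder-polling pattern described above (not SYSPRO's
# actual DFM): watch an inbox folder for XML transaction files, hand each one
# to a processing function, and move it to an archive folder.
import time
from pathlib import Path

INBOX = Path("xml_inbox")
ARCHIVE = Path("xml_archive")

def process_transaction(xml_text):
    # Placeholder for invoking the relevant business object with the payload.
    print("processing:", xml_text[:60], "...")

def poll_once():
    ARCHIVE.mkdir(exist_ok=True)
    for xml_file in sorted(INBOX.glob("*.xml")):
        process_transaction(xml_file.read_text(encoding="utf-8"))
        xml_file.rename(ARCHIVE / xml_file.name)  # avoid reprocessing

if __name__ == "__main__":
    INBOX.mkdir(exist_ok=True)
    while True:          # continually check the predetermined folder
        poll_once()
        time.sleep(30)
```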

SYSPRO Expands Strategy

SYSPRO continues to expand its .NET Framework strategy with its latest product release, SYSPRO Version 6.0 Issue 010. The brand new SYSPRO Reporting Services (SRS) suite is written using .NET technology, and uses the business objects to render the reports seamlessly to an embedded Crystal Reports XI Server. SRS offers additional functionality such as archiving, scheduling, report customization, and various output methods. Issue 010 also sees the release of a new SYSPRO Analytics module (totally rewritten in .NET) which provides a solution for analyzing and dissecting data, enabling businesses to track trends, recognize and adapt to changes, and make informed decisions.

SYSPRO delivers information to users with a sophisticated analytics tool that is fairly easy to use, and does not require the technical knowledge of an online analytical processing (OLAP) developer. The new UI in Issue 010 has also been rewritten using cutting-edge GUI components, offering SYSPRO users easy personalization of their own screens, as well as easy customization enabling the use of Web services with VBScript associated with an unlimited number of user-defined fields. The customization can be deployed on a central or distributed basis. Electronic Signatures, also released in Issue 010, enables much more than just operator authentication when transactions are processed. The flexible design enables processes to be triggered based on user-defined criteria, facilitating enhanced business process controls that link to third party or custom programs, or to Web services with VBScript.

SYSPRO e.net solutions continue to provide a solid foundation that enables businesses to build or develop a service-oriented architecture (SOA). SOA concepts can simplify the reengineering of business processes, and provide a solid and obvious foundation for companies to respond more quickly to change. Business benefits from SOA should also include improved time-to-market, more responsive customer service, and increased visibility in the face of changing regulations, such as the US Sarbanes-Oxley Act (SOX) (see Using Business Intelligence Infrastructure to Ensure Compliancy with the Sarbanes-Oxley Act) and US Food and Drug Administration (FDA) requirements.

As a result of a major effort over a few years (about one year to develop the SOA tool set, and the rest to port the code), Epicor announced its new object-oriented, SOA-based ERP systems in late 2004. Featuring an n-tier architecture built with Microsoft .NET and Web services technology, both Vista 8.0 and Vantage 8.0 are architected from the ground up to support SOA, which should enable user businesses to leverage software services and components through open industry standards. In turn, this should simplify application-to-application (A2A) integration and supply chain connectivity. The new architecture exposes all functionality as Web services, which should make it easier to orchestrate business processes and workflows within the application in order to promote lean principles and continuous performance initiatives. It also promises new levels of application reliability, scalability, system interoperability, and flexibility, combined with a rich and personalized user experience.

Today, nearly 500 business objects across 30 modules (featuring thousands of business functions) are exposed as business services, meaning that customers should be able to extend the applications and develop integrations to other products. To that end, given the somewhat heterogeneous portfolio of Epicor products, the vendor announced Epicor Service Connect (released in fall 2005), a Web services-based business integration platform. This platform functions as the central integration point for implementing secure workflow orchestrations within Epicor applications and with third party applications to enhance collaboration and automate business processes. Harnessing the openness of XML Web services, Service Connect uses industry-wide standards and technology enabling businesses to deploy solutions now, with a degree of confidence that their investment will remain intact in the future. The SOA of many Epicor solutions enables Service Connect to transform or combine application processes to streamline processing within the application framework, whereby business components, represented as Web services outside of the application, can easily be accessed within Service Connect to eliminate non-value added steps and streamline basically any business process.

Designed to support both internal and external connectivity to Epicor solutions, as well as to external applications or processes, Service Connect provides a straightforward solution for graphically mapping process flows and orchestrating transactions. To that end, the tool uses visual workflows to map data to different formats and to create and assign tasks for human interaction, and uses "drag and drop" processes to call Web services, enabling non-programmers to build their own scenarios that interoperate with the application. For example, processing sales orders typically involves multiple steps, including numerous availability inquiries, reviews, and inventory release decisions, all serving to extend order lead times. Service Connect enables users to eliminate many of these steps by creating orchestrations that route processes to automated tasks, such as order-submit direct-to-pick for specific inventory items, or priority order fulfillment for a company's best customers, in turn improving order-to-delivery performance. By enabling non-programmer users to automate tasks and processes within the application, Service Connect also helps to promote lean principles, continuous performance initiatives, and Six Sigma quality, by providing a simple workflow orchestration tool to improve collaboration, velocity of information, and ultimately value chain performance.