Tuesday, October 15, 2013

Commercial APIs and the Open/Closed Principle (hypotheticals galore)

Suppose you work for a company that markets a desktop software product that is worth 10,000 times its price in operational savings to your customers. Because you must contend with some worthy competitors, your product earns revenue from only 50% of your potential market (and it doesn't sell at a price that is anywhere near its value).
Now, further suppose that you wrote a program that controls your company's product through its exposed programming interface, automating its key features so that it can perform millions of operations per day and run the product 24/7 with almost no human interaction. You soon realize that your product's customers would like to develop and run similar programs to gain the same speed and cost savings in their own companies. Because your application is specific to your company's business and won't work for everyone, you prepare a library that can be loaded and used by the applications your customers develop for themselves. With your library, their applications get the same access to your company's product that yours has, but tailored to their needs.

Here are a few function prototypes that your library might offer:

List<string> GetImportantData(string whichStuff)

bool StoreImportantData(string storeStuff, string where)

string GetCalculatedResults(string workStuff)

That three-method library is an application programming interface -- an API. Now your customers not only purchase your company's product; most of them will probably want your API library, too.
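To make that concrete, here is a minimal sketch of a customer's client application driving your product through such a library. Every name in it is hypothetical; the stub class simply stands in for the vendor-supplied library.

using System;
using System.Collections.Generic;

// Stand-in for the vendor-supplied API library (all names are hypothetical).
class AutomationLibrary
{
    public List<string> GetImportantData(string whichStuff) { return new List<string> { "order-1001" }; }
    public bool StoreImportantData(string storeStuff, string where) { return true; }
    public string GetCalculatedResults(string workStuff) { return "calculated:" + workStuff; }
}

// A customer's client application that automates the product through the library.
class NightlyBatchJob
{
    static void Main()
    {
        var api = new AutomationLibrary();
        foreach (string item in api.GetImportantData("openOrders"))
        {
            string result = api.GetCalculatedResults(item);
            api.StoreImportantData(result, "resultsArchive");
        }
        Console.WriteLine("Batch run complete.");
    }
}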

Modification 1:
So, for the first couple of years your company's sales numbers increase; counting your library, you're now selling two products. Then technology changes, and you may soon be losing market share to a competitor who is nearing the release of a product that offers the same features as yours, including those provided by your API library, but built on a different programming technology. Instead of driving a monolithic desktop program that offers far more features than any one customer needs, a client application can load and exercise only the components it actually uses. That can translate to higher processing speed, less disk space, and lower product cost, because customers no longer pay for unused features.

You certainly do not want to lose market share, so your company sets out to match the competition's new offering by developing new software that offers the same features included in your original desktop product, but now those features can be purchased and loaded on an as-needed basis. This will provide the same technology and benefits as your competitor's product.

You are quick to remind everyone that a significant share of your product's sales has been due to the value provided by your API library, and there is every reason to expect the same when customers adopt the new product architecture. So you now want to provide an API library for the new product that delivers the same benefits as the original. The new product uses a different technology, but the actions that need to be programmable rest on the same basis: customers will still want to retrieve and store data, and will still need the calculations they obtained from your original product. A couple of new features are made possible by the change in architecture, and you will serve your customers best by making them available through your API. Some customers may want to automate the object-based nature of the new product from client applications designed with the same object-based technology. Because the objects they use in your new product are no longer anchored to the monolith, your library will need a little more information to obtain a connection with the main product from within their client applications. So you introduce a slightly different API set to handle this new option:

List<string> GetImportantStuff(string whichStuff, long serverLocation)

bool StoreImportantStuff(string storeStuff, string where, long serverLocation)

string GetProcessedResults(string workStuff, long serverLocation)

Excellent!  You're now providing the same capabilities to your customers that they found so compelling in your original monolithic desktop product. You are saving them disk space and product expense as well as increasing their processing speed because their client applications are making use of refactored, faster calls.


------------------------------------------------------------------

When Modification 1 was in design, it would have been a good idea to provide customers with what appears to be the same library you offered with your original product. It would be loaded by customer client applications just as before, and it would expose the same set of methods; on its back end, however, it would talk to your new object-based product instead of the monolithic desktop application. For Modification 1 it might be necessary to bridge the gap between product generations by presetting new configuration options on the system so that earlier-generation API calls work in the new environment. With that, your customers could keep using the client applications they have written over time. They would not have to rewrite applications they had perfected over the years just to get the same benefit from your new product that they already get from the one they are running.
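As a rough sketch of that idea (every type name here is hypothetical), such a compatibility library might look like this:

using System.Collections.Generic;

// Stand-in for the new object-based product's native client (hypothetical).
class NewProductSession
{
    public NewProductSession(long serverLocation) { /* connect to the new product */ }
    public List<string> FetchData(string whichStuff) { return new List<string>(); }
    public bool SaveData(string data, string where) { return true; }
    public string RunCalculation(string workStuff) { return workStuff; }
}

// Compatibility library: same method surface as the original API, but its back
// end drives the new object-based product instead of the old desktop monolith.
class LegacyAutomationLibrary
{
    // The server location comes from a preset configuration option rather than
    // from the call signatures, so existing client code does not change at all.
    private readonly NewProductSession session = new NewProductSession(ReadConfiguredServerLocation());

    public List<string> GetImportantData(string whichStuff) { return session.FetchData(whichStuff); }
    public bool StoreImportantData(string storeStuff, string where) { return session.SaveData(storeStuff, where); }
    public string GetCalculatedResults(string workStuff) { return session.RunCalculation(workStuff); }

    private static long ReadConfiguredServerLocation() { return 0; /* read the preset option */ }
}
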
From my time working in a developer support role, I know of several scenarios in which customers cannot update their client applications. Sometimes the subject-matter expert who wrote the application has left the company, or the application was written by a short-term contractor. Sometimes companies have lost the source code for applications they run regularly as people left, hardware was updated, or roles changed. Sometimes a change in development technology, such as the programming language or development environment, makes editing and recompiling a challenge: "There isn't anyone here who does that anymore!"
When the new technology gives your customers a business case for investing in new client applications, their development team can take up that task -- a much simpler and less expensive job than re-creating what already worked in the old technology.

This is where a software engineering best practice, the Open/Closed Principle, applies. It can be stated as, "an interface should be open for extension, but closed to modification." Once published, a functional interface must remain exactly the same -- no changes allowed. Development organizations spend a lot of time and money writing applications against an interface, and if the interface changes, some proportion, if not all, of that time and expense must be spent again to react to the change. When the interface developer wants to add capabilities that would otherwise require changing the interface, they instead add new member functions as extensions. With that, legacy applications continue to work as before with every new release of the interface library. A familiar example is the set of HTTP methods. The likelihood of changing the original methods such as GET and POST is extremely remote, since doing so would break nearly every website; instead, HTTP has been extended without damage to preexisting work by adding new methods such as PUT, DELETE, and PATCH.
So, applying this wisdom to Modification 1, the new product could ship what is, by nearly all indications, the original library, but one that also includes the new product's extension methods -- the three original methods together with the three new ones. For customers who intend to use only the new API methods and the new object-based programming technology, the objects our API library talks to on its back end are directly available to their new object-based client applications.
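Here is a minimal sketch, with hypothetical interface names, of what that looks like in code: the basis interface stays frozen, and the new capabilities arrive purely as an extension.

using System.Collections.Generic;

// The published basis API: once released, these signatures never change.
interface IProductAutomation
{
    List<string> GetImportantData(string whichStuff);
    bool StoreImportantData(string storeStuff, string where);
    string GetCalculatedResults(string workStuff);
}

// The extension adds the new object-based capabilities without touching the
// basis; legacy clients keep compiling against IProductAutomation alone.
interface IProductAutomationV2 : IProductAutomation
{
    List<string> GetImportantStuff(string whichStuff, long serverLocation);
    bool StoreImportantStuff(string storeStuff, string where, long serverLocation);
    string GetProcessedResults(string workStuff, long serverLocation);
}
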
With that, your customers are never forced to think about how much time it would take to rewrite their client automation applications. They can continue to enjoy the value of what they built in the past to run with your product, and they are not pushed into a situation where they would be wise to look into whether a competitor offers a less expensive or better way to get the job done. The cost arithmetic won't justify moving to a competitor's product and rewriting all of their API client applications when they are already running a suite of client applications that keeps working with every version of your product.

Modification 2:
After a company meeting you learn that your company has merged with your most successful direct competitor. The combined company now has two large customer bases standardized on two similar products, which of course have different API definitions. How do you handle this in order to retain the product loyalty of both customer sets? It seems inevitable that, rather than continue to develop two products with an almost exact feature match, half of your engineering resources will be retasked: in the near future, either one of the two previously separate products or a new hybrid product will be the only one sold. As with Modification 1, how do you retain the loyalty of customers who may never have used the surviving product or technology before the merger?
As mentioned above, two approaches to the API problem are feasible. One: the company decides that one of the products will survive and the other will be phased out over the next couple of releases, giving that side of the customer base time to migrate. Or: you help the affected half of the customers by either replacing their soon-to-be-deprecated API client applications -- such as by preparing a code compiler/translator -- or by providing a new API library that exposes the same interface as the sunset product's API but drives your surviving product.

------------------------------------------------------------------

If the surviving application was developed by your new step-company, then you will want to develop a new version of your standard API library that loads as before and exposes the same methods as before, but serves as an adapter to the new API. With that, your customers' client applications continue to function as before, but with the new product. Again, no changes should be necessary on the customers' end other than installing and configuring the new product.

If the management decision is instead to release a new product that combines and replaces the offerings of each partner in the merger then, as you might guess, you need a new API library that exposes both your product's and your step-sibling's APIs, and that loads and is used with no new customer effort. You can always extend the new library to offer features available in the new application or technology, but, for the reasons cited in Modification 1, you must always preserve the customers' ability to keep using the API client applications they prepared in the past by offering an unchanged basis API.
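A minimal sketch of that combined library, with a made-up method name standing in for the step-sibling's historical API:

using System.Collections.Generic;

// Hypothetical combined library after the merger: both vendors' historical
// method names are exposed, and both delegate to the one surviving engine.
class MergedAutomationLibrary
{
    // Your original basis API surface.
    public List<string> GetImportantData(string whichStuff) { return QueryEngine(whichStuff); }

    // The step-sibling product's historical API surface (name invented here).
    public List<string> RetrieveRecords(string query) { return QueryEngine(query); }

    // One back end serves both, so neither customer base rewrites anything.
    private List<string> QueryEngine(string request) { return new List<string> { request }; }
}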

Modification 3:
Your development team learns that your company is preparing to develop and release a new type of system that is both similar to and different from what you've been supporting until now. It is similar in that customers still want to automate storing data, retrieving data, and obtaining the results of calculations from the application -- a welcome acknowledgment to you and your development team that your API has provided a solid basis for automating your technology.
When you think about preparing an API for this new product, consider whether your customers who have written client applications that automate your other products might be able to use those same applications with the new product variation. If you stick to the basis used so far, offering the same set of methods, would their applications work as written with no more than some configuration changes?

 Modification 4:
Customers and sales representatives continually see significant value in, or are coerced into, adjusting their operations so they can take advantage of newer technologies. For example, a product built to run in a web browser can be quickly and inexpensively installed, distributed, and updated, and access security for such a system can be administered in a very general way. So, of course, customers and sales reps are interested in what you can do for them in a mobile or cloud-based solution. As those questions are asked, consider how knowledge of the open-closed rule can help when the product you will soon offer serves browsers from the web.

------------------------------------------------------------------

In the context of this discussion it is important to note that, whatever new technology arrives, customers will almost certainly continue to run their operations on the same kinds of systems: Windows, OS X, Linux, and so on. While the new technology offers new features -- for instance a JavaScript API that customers or their server applications can use to write automation into their HTML pages -- your first task is to protect your legacy customers by maintaining open-closed support: provide an API library that their existing client applications can still load and use. The important point is that you continue to provide automation access through your basis API so customers are never required to change their legacy applications. The API library you provide to keep supporting legacy client apps on this new product may need to do some tricky work on its back end to relay calls made into the standard library to the web-based objects, but you must not break the link between the products your customers have standardized on and your new product technology. Both technologies run on the same systems, so this is both possible and the way to keep delivering the benefits of your customers' workhorse applications.
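A rough sketch of that back-end trick, with a purely hypothetical endpoint URL: the legacy signature stays put while the transport behind it changes.

using System;
using System.Net.Http;

// The legacy API surface is preserved, but the library's back end now calls the
// web-based product over HTTP. The endpoint below is invented for illustration.
class WebBackedAutomationLibrary
{
    private static readonly HttpClient http = new HttpClient();
    private const string BaseUrl = "https://product.example.local/api";

    public string GetCalculatedResults(string workStuff)
    {
        // Same signature the customer's ten-year-old client app expects;
        // only the transport behind it has changed.
        return http.GetStringAsync(BaseUrl + "/calculate?work=" + Uri.EscapeDataString(workStuff)).Result;
    }
}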

Modification 5: In Modification 4 we were concerned with a jump to a web-browser thin-client offering. You can also jump to a server-based web technology, which requires that the standard API library variation you've provided for every product so far still supports the customers' legacy client applications while communicating with the web server where your new server-based product is installed.

------------------------------------------------------------------


Modification 6:  By now, the pattern should be clear: With any version of your company's product, provide an API library with the same API interface exposed so your customers are always able to use their legacy client applications with your new generation product.  That would include a mobile phone or tablet-based offering.  It might involve a watch or a car, or some technology of which we are not yet aware.  The pattern is, preserve the basic interface, extend it as you wish, and sometimes consider the extensions a new part of the basis moving forward.

------------------------------------------------------------------

Conclusion: If you prepare an automation library that exposes the same API basis for every generation and variation of your product you save your customers from rewriting developed, tested, and proven applications. They do not need to pay for new planning, design, development, and testing of a new generation of automation applications. They already possess applications that they know and trust will deliver. When you release a new generation of your product that features the use of a new technology, smart customers will see that as a time when they should evaluate your competitors' products, too. Because the suite of automation applications that they need is already on your side of the balance, that evaluation period will likely be very short, and to your advantage.

When a potential customer evaluates your product, even without a suite of automation applications on the shelf based on your API, your reputation will help them realize that all applications they write to automate your product will be usable in all technologies supported by your products in the future.

When a company's development manager considers the design and implementation of the automation applications they will produce to drive your products, they will write the suite with long-term efficiency and quality in mind. They can write more efficient applications and application components because they know they can rely on your basis API as a given.

Wednesday, March 27, 2013

C#: GlobalAddressCompare with HttpWebRequest

A small project I delivered a couple of years ago was for the benefit of the international QA team at Melissa Data. They wanted to automate comparing the street address formats returned by our product with the values returned by the Google Maps API, the Bing Maps API, and Address Doctor.



In an example run, the application whose UI is shown above reads a large list of street addresses from a tester-prepared input file and submits each of them, via a C# HttpWebRequest call, to the Google and Bing Maps web services as well as to the Address Doctor product we run in-house. Upon receiving the XML-formatted responses it filters them so only the relevant, comparable data remains, presents that data for visual side-by-side comparison, and then compares the results returned by the selected services.
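Stripped of the UI and the filtering, the request pattern looks roughly like this. The query parameters are simplified and the key handling is omitted; the Bing and Address Doctor calls follow the same request/response shape.

using System;
using System.IO;
using System.Net;

class GeocodeRequestSketch
{
    // Submit one address to a geocoding web service and return the raw XML.
    static string GeocodeXml(string address, string apiKey)
    {
        string url = "https://maps.googleapis.com/maps/api/geocode/xml?address="
                     + Uri.EscapeDataString(address) + "&key=" + apiKey;

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "GET";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd();   // XML to be filtered down to the comparable fields
        }
    }
}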

Java/SQL: List Dropped Records

At Melissa Data we purchase many billions of personal contact records from multiple sources representing different facets of the world of commerce. We combine them, eliminating duplication and obsolete information -- much like the goal of database normalization -- to distill them into the richest, fullest set of records anywhere, baked fresh in three-week cycles. Companies who were once our competitors now submit their records to us so they can be both updated with the most current information and appended to fill in their blanks.

The processing of these records is done in a sequence of phases, starting with a simple conversion from the source formats to our standard. Through our software build process each address acts like a magnet that attracts all of the data associated with it from the batch of original records.  We start with a set of as many as twenty billion records and end up with around one billion product records.

We sometimes encounter a large but imperceptible loss of records during the transition from one build phase to another after we've changed the code that controls the build process. It is not uncommon to find we've lost around 400 million records between steps -- and not realize it unless something unexpected pops into view, because that's less than 2% of the data involved.

The application I wrote, known as "List Dropped Records," tracks every record -- whether it has been reduced to an archive or remains active -- in every phase. It opens the hundreds of thousands of files, reads the record lists therein, and compares the phases to report what has disappeared, in the form of a database table that lists the record IDs dropped in each phase. With that we can quickly learn whether we have loss, and how the amount of loss changes from one build to another.
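The core comparison step amounts to a set difference between phases. The production tool is written in Java and also handles the archive and ancestor bookkeeping; this C# sketch shows only the basic idea.

using System.Collections.Generic;

class DroppedRecordsSketch
{
    // Given the record IDs present in two consecutive build phases, report the
    // IDs that disappeared in the transition. The real tool also accounts for
    // records that were legitimately merged or archived; this is only the raw
    // comparison.
    static HashSet<string> FindDropped(IEnumerable<string> previousPhase, IEnumerable<string> currentPhase)
    {
        var dropped = new HashSet<string>(previousPhase);
        dropped.ExceptWith(currentPhase);
        return dropped;   // candidates for the "dropped records" report table
    }
}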

Friday, March 15, 2013

C#/Custom User Controls: AtomSet Utilities

Overview: In a project whose goal is to convert a huge set of international street addresses, obtained from hundreds of different sources, to the format that is standard for each address's country, the code we develop at Melissa Data executes a succession of processing steps. We want to see the effects of our code at each step to ensure it achieves what we expect -- and, if not, to see what it did instead. The AtomSetUtilities application is the solution I prepared to address this need.

There is an internal class, AtomSet, that is the data storage unit for an international address throughout the sequence of processing steps. Its data is stored, in the raw format in which it was received from our sources, in a normalized database and assigned to the class members during construction. Because it covers approximately 240 different national address formats, the definition of an AtomSet will sometimes change as development work continues or because national formats vary. The data it contains also requires a variable number of entries in its components, so a viewer application must read and present each AtomSet dynamically.

Because there are nine different steps in the execution sequence that we want to view, I wrote a C# user control, the AtomSetViewerUserControl, that can be dropped onto a UI area and loaded with whatever dataset is currently required. The screenshot below shows one of these user controls, containing three dynamically placed text boxes, on the AtomSetUtilities configuration property page.



A particularly interesting feature I added -- not in the original requirements, but one I thought would help communication between QA and Development -- is the ability to record and reload a Snapshot. A Snapshot is the set of data values present in the AtomSetUtilities UI at some instant, serialized so that the running state can be reloaded later.
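A minimal sketch of the Snapshot idea, using a hypothetical snapshot class and plain XML serialization; the real implementation may differ.

using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

// Hypothetical snapshot type: the values shown in the UI at one instant.
public class AtomSetSnapshot
{
    public string StepName;
    public List<string> FieldValues = new List<string>();
}

static class SnapshotStore
{
    // Serialize the current UI state so QA can hand the exact running state to
    // Development, who can reload it later and see the same screen.
    public static void Save(AtomSetSnapshot snapshot, string path)
    {
        using (var writer = new StreamWriter(path))
            new XmlSerializer(typeof(AtomSetSnapshot)).Serialize(writer, snapshot);
    }

    public static AtomSetSnapshot Load(string path)
    {
        using (var reader = new StreamReader(path))
            return (AtomSetSnapshot)new XmlSerializer(typeof(AtomSetSnapshot)).Deserialize(reader);
    }
}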

Here is another property page with evidence of progress after some steps have completed:


Because I am responsible for developing 21 different utility applications that concern similar subject matter, I saw several violations of one of my favorite software design rules, the Once and Only Once rule. Stated from the hip: "a rule should be coded once and only once, and any duplication should be eliminated by extracting a method that can be called from every place where the duplication was." As I developed the 21 applications I prepared and reused 23 libraries, among them: 1) the viewer described above; 2) the AtomSet used in all 21 applications; 3) a class definition that is displayed in a combobox; 4) a customized file class; 5) a custom file-open dialog containing an MRU list; 6) an international character culture translator; and 7) a configuration serialization class.

Sunday, March 10, 2013

C#: KML Generator

Project overview: We want to show high quality competitive analysis, marketing, and project progress overviews in an easily understood geographical view. 

I studied GIS tutorials and reverse-engineered some KML examples to write the KMLGenerator application, which produces two- and three-dimensional overlays on Google Maps and Google Earth. The original request for an application that could render data geographically was for a color overlay, but as I explored the GIS methods I found that extruded features did a better job of conveying magnitude and of sticking in the memory, so I added several additional rendering and color modes.
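As a rough illustration of the KML involved (the name, color, and scale factor are made up for the example), an extruded region boils down to a Placemark fragment like this, emitted once per geographic entity:

using System.Text;

class KmlSketch
{
    // Emit one extruded KML polygon whose height encodes a data value.
    // Coordinates are "longitude,latitude,altitude" triples, the color is in
    // KML's aabbggrr hex form, and the fragment belongs inside a
    // <kml><Document> wrapper.
    static string ExtrudedRegion(string name, double value, double[][] perimeter)
    {
        double heightMeters = value * 1000;   // hypothetical value-to-height scaling
        var coords = new StringBuilder();
        foreach (double[] point in perimeter)
            coords.AppendFormat("{0},{1},{2} ", point[0], point[1], heightMeters);

        return
            "<Placemark><name>" + name + "</name>" +
            "<Style><PolyStyle><color>7f0000ff</color></PolyStyle></Style>" +
            "<Polygon><extrude>1</extrude><altitudeMode>relativeToGround</altitudeMode>" +
            "<outerBoundaryIs><LinearRing><coordinates>" + coords + "</coordinates></LinearRing></outerBoundaryIs>" +
            "</Polygon></Placemark>";
    }
}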

The app's UI looks like this:


It accepts easy-to-understand, user-prepared Excel CSV files containing the location, data categories, and data values, along with another file I prepared that contains the centroid and perimeter coordinates (geocodes) of geographic entities such as countries and states. It generates output that conveys the numeric data values on an Earth map that looks like this:

and more of the same in a United States view:


Street Segments
Another project concerned showing street segments where the street is known by a different name in different locations, so we could visualize what we were reading in the data. For instance, Pacific Highway South, Aurora Ave, Highway 99, and Evergreen Way are all different names for Washington State Highway 99 between SeaTac and Everett, WA. The map below shows sections of a road that are known by different names, using a different color for each name. A popup provides additional information such as the local street name and the address range.



I wrote this application so it would read and process a large data file and produce a different Google Earth folder for each viewpoint. Doing so allows the person reviewing street names for preferred values within a geocode range to run the entire list once and then visit the folders -- each of which contains one road -- as they have time.

Tuesday, January 8, 2013

Java/SQL: Record Indexer

At Melissa Data we purchase large mailing lists from several different facets of the business world and distill them into a form that our customers, from many business perspectives, can easily query to gain valuable information. We start with 15-20 billion records from our suppliers and combine them, eliminating redundancy and outdated information, until we end up with a much smaller and fuller set of customer contact records.

The size of the data set just mentioned makes it very difficult to see the effect of every rule we code into the software distillation process that converts purchased data into our product. Each small error or misunderstood user story in our code might obscure or destroy a lot of value in the final product. We've seen more than 100 million records lost -- without anyone realizing it -- several times, due to a minor oversight in the code at some distillation stage. Because the output at each phase is so large, the time needed to review it properly would deadlock our development efforts, so we need a way to quickly locate the text of suspicious records, as well as that of their ancestors, at every distillation stage.

At the origin of the record distillation process we've purchased thousands of raw text files that, depending on the source, may each contain between hundreds of thousands and millions of records. As the data is processed in four distinct stages including its raw state, two distillation steps, and the final product, all data from all original files remains present at each level. To most efficiently review and test the effects of our software at each development stage we need access to the records' text at all four distillation phases.

The solution we're using for this is a SQLite database named IndexID.db. It contains two simple tables. The first, "recordsIndex", contains three fields: a record ID, a file ID, and a file offset. The second, "filesIndex", consists of a file ID and the fully qualified path to the file. When a developer -- or better yet, a program -- needs to quickly see the contents of a record, the record's ID value is used to locate and read the record text.
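A sketch of that lookup path, with column names guessed from the description above and the SQL shown through the System.Data.SQLite provider; the production code is Java, and one record per line is assumed here.

using System.Data.SQLite;   // the System.Data.SQLite ADO.NET provider is one option
using System.IO;

class RecordLookupSketch
{
    // Given a record ID, find which file holds the record and where it starts,
    // then read the record text. Table names follow the two-table layout above.
    static string ReadRecordText(string recordId)
    {
        using (var conn = new SQLiteConnection("Data Source=IndexID.db"))
        {
            conn.Open();
            var cmd = new SQLiteCommand(
                "SELECT f.path, r.byteOffset FROM recordsIndex r " +
                "JOIN filesIndex f ON f.fileId = r.fileId " +
                "WHERE r.recordId = @id", conn);
            cmd.Parameters.AddWithValue("@id", recordId);

            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return null;

                string path = reader.GetString(0);
                long offset = reader.GetInt64(1);

                using (var file = new StreamReader(path))
                {
                    file.BaseStream.Seek(offset, SeekOrigin.Begin);
                    return file.ReadLine();   // one record per line assumed
                }
            }
        }
    }
}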

After four paragraphs describing what is involved, we finally get to what I think is the interesting part -- the application that creates and loads the IndexID database, the IdIndexer. Every record formatted for our system contains a field that holds a list of the record's ancestor record IDs -- the IDs of the records whose data found its way into the record. As each stage of the distillation process finishes, every bit of the original data can still be found within one of the records, but always in one with a different ID from the previous or next phase. The results of all build phases are saved to files in their own directory tree. The IdIndexer opens every file in the build directory tree and locates and lists every path, filename, record ID, list of ancestor record IDs, and the offset to each record within the files.

The amount of disk space a database such as IndexID.db consumes is proportional to the number of characters stored in its field values. Because the number of records in IndexID.db is so great, many characters are required to specify each record ID. The syntax for our record IDs uses 40 alphanumeric characters grouped by three dashes. The database storage needed for the 40-character IDs alone is 40 times the number of records -- in our case around 25 billion -- so we are talking about roughly one trillion bytes. To minimize the storage for the record IDs I took two approaches, the first of which was to convert the last quarter of the ID from a ten-digit string of base-10 numerals to its base-36 equivalent, which typically shrinks it from ten digits to just a few.

In the case of the other three ID segments: the first contained only four characters, spelling one of a very small number of strings, and the second and third segments each contain around ten alphanumeric characters which happily increment from zero upward and therefore start with long runs of zeros. It was quickly obvious that there was a lot of duplication in all three segments. What I did was map each segment's value to a counter that is incremented each time a new segment value is encountered, and store that counter in base 36. So, for instance, the first segment was converted to the numeral 1 in place of the original "ABCD". In the second and third segments a string like "000001R0469" might be, say, the 48,096th unique string encountered, and so is stored as "1140", which is 48,096 written in base 36. With those translations applied to the four segments we now store the record IDs for the 25 billion records mentioned earlier in roughly 8 characters instead of the original 40, which reduces the disk space used from about one trillion bytes to closer to 200 billion.
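A sketch of the segment-mapping idea follows (C# here for consistency with the blog's other samples; the production indexer is Java, and a real implementation would likely keep a separate map per segment position):

using System.Collections.Generic;

class IdCompressionSketch
{
    private static readonly Dictionary<string, long> seen = new Dictionary<string, long>();
    private const string Digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Map a raw ID segment to a small counter value: the first unique segment
    // string ever seen becomes 1, the second 2, and so on; the counter is then
    // written out in base 36.
    static string CompressSegment(string segment)
    {
        long counter;
        if (!seen.TryGetValue(segment, out counter))
        {
            counter = seen.Count + 1;
            seen[segment] = counter;
        }
        return ToBase36(counter);
    }

    // Standard base-36 encoding; e.g. ToBase36(48096) returns "1140".
    static string ToBase36(long value)
    {
        if (value == 0) return "0";
        string result = "";
        while (value > 0)
        {
            result = Digits[(int)(value % 36)] + result;
            value /= 36;
        }
        return result;
    }
}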