• Hundreds and hundreds of changes.
• Complex interactions of web pages with database
records.
• Minor changes needed almost daily to graphics,
buttons, the language of messages, the contents of forms.
• Changes are made constantly as I try out a page, see
the result, go back for another try.
• I might change a particular item dozens of times in
a single hour.
• Billions and billions of changes.
• They are constantly selling, selling, selling, and
supplying, supplying, supplying.
• No sooner do I have inventory consistent than
somebody buys more, or drivers deliver more to sell.
• I might change an item thousands of times in an hour.
The web developer.
It's ironic. The one with the slowest pace of
change feels the most flustered.
Of course this may be because the developer
drinks more Java (or perhaps Jolt) than the average database. But it might also
be because the database is much more automated than the developer.
The database, once it is trusted, just makes
many small changes all on its own. It seldom replaces one inventory system with
another, perhaps just continuing to patch things up, because it knows that, at all
costs, the company must always remain a "going concern". Not
perfect, just going.
No one needs to approve a change in an
inventory record through a formal process. The database is tested, and trusted,
to change these all on its own, and in fact often becomes the final arbiter of
what the proper values in the records are. So if an order comes in for 1000
widgets, they are dispatched, the quantity-on-hand record is decreased by 1000,
perhaps 1000 widgets are added to a supply-chain queue, and the financial
records are updated.
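In code terms, the database treats those three updates as one atomic unit: either the order ships and every record moves together, or nothing changes at all. A minimal in-memory sketch of that all-or-nothing behavior (the class and field names here are invented for illustration, not part of any real inventory system):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical inventory that applies an order's three updates atomically.
class Inventory {
    private int quantityOnHand;
    private long revenueCents;
    private final Queue<Integer> reorderQueue = new ArrayDeque<>();

    Inventory(int initialStock) {
        this.quantityOnHand = initialStock;
    }

    // Ship an order: decrement stock, record revenue, queue a resupply.
    // If stock is insufficient, no record changes at all.
    synchronized boolean processOrder(int qty, long priceCentsEach) {
        if (qty <= 0 || qty > quantityOnHand) {
            return false;                     // reject: nothing is modified
        }
        quantityOnHand -= qty;                // update quantity-on-hand
        revenueCents += qty * priceCentsEach; // update financial records
        reorderQueue.add(qty);                // ask the supply chain for more
        return true;
    }

    synchronized int quantityOnHand()  { return quantityOnHand; }
    synchronized long revenueCents()   { return revenueCents; }
    synchronized int pendingReorders() { return reorderQueue.size(); }
}
```

A real system would do this inside a database transaction, of course; the point is only that the three updates succeed or fail as a unit, with no one approving each change by hand.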
Hopefully someone needs to approve a change
to a web page. At the delivery end are customers, and these are very fragile
creatures, most definitely not part of our system and not much amenable to
testing. They can just go somewhere else. In order to keep them coming back to
our site, as distinguished from another site, we most assuredly do not want
them to encounter missing links (say a product photograph not there),
badly-formatted pages (say a missing end tag), or colors that turn people away,
maybe only when rendered in a particular browser that some of our customers
stubbornly continue to use.
Hopefully the addition of a new page to a
live site goes through a staging process, which involves testing with various
browsers and running through its options, before the page is actually deployed.
The point is that the two sides of this
operation, presentation and data, not only exist for different purposes and
have different uses and users, they have very different time frames.
Now let's look at the technology that
reconciles these two ends. That's right, the business tier. Don't
ever sell this part short: it is the part that makes sure your company gets the business, that the
delivered presentation keeps the customers there, that the delivered data is
appropriate and accurate, and that delivered updates keep the database in a
consistent state.
The person in charge here is the “middie”.
• Making a change to improve the presentation means you
are taking a chance on fouling up the data.
• Making a change in processing the data means you are
taking a chance on fouling up the presentation.
Separate presentation and data, and leave
it to the business tier to reconcile the two. Doing a good job of this
is in large part what gives this particular business enterprise its unique niche in the world.
It doesn't seem dynamic.
• There are too many things to do, and it is too
clumsy, when the presentation is changed, to deal with the business developers,
who don't appreciate the subtleties of real graphic design in any case. The
business developers should let me
control the changes.
• My database records are the only important thing, the
rest of it is "just fluff". If records are not correct and usable by
accounting our business will soon fail. The business developers should let me control the changes.
Sounds like “webbies” versus “techies”. Of
course some of us are both, but often at different times of the day (and
especially the night). Both are highly legitimate professions, but it is hard
to practice them both at the same time. The immediate goals are not the same.
Perhaps we connect using templates, combining
code from both parties, such as Active Server Pages, ColdFusion Markup, or Java
Server Pages.
This separates the
parties.
The techie provides components such as
JavaBeans or Active Server Objects.
The webbie just uses the components where
data needs to be inserted into a page.
One object, the "active page", is
dynamic and readily changed. In the case of Java Server Pages this object can
be compiled into a servlet very easily. This is nicely dynamic, and perhaps it
is compiled the first time it is actually used.
The webbie doesn't get a really clean view of
what is being constructed: the page is filled with strange tags that
have meaning only to the techie who constructed them.
The techie doesn't get a really clean view of
the flow of information between the various components, and has to work
overtime to see that the webbie cannot direct this flow in disastrous ways.
And all this was done in the name of 'being
dynamic'.
No one really just edits an 'active page' on
an active site and lets the server compile it and install it. Or at least,
those who do stand a large chance of losing customers who don't like the
results, if anything strange occurs in the new flow of information.
The web developer certainly perceives a need
for dynamism, on the scale of dozens of times an hour, because there is a need
to constantly test and refine.
That need is not coupled with a need in the
beginning for dynamic data. In fact quite the contrary need is present: the
need to test with the same data over and over again in order to see the effect
of each change in the page.
Later perhaps the page can be exposed to real
dynamic data, in the staging phase, to learn more about its 'behavior under
load'. But even here if real benchmarking is to be done, there must be accurate
information about the sizes of the data items involved, and care must be taken
to distinguish the behavior attributable to the page and the behavior
attributable to the data servers.
What approaches might encourage the
separation of presentation from data, while maintaining the ability of each
party to control changes in their domain?
• The webbie creates real web pages, that have real
forms to submit data, that will be coupled in the business tier with something
to validate the data entered. Perhaps the validation will be done by scripts on
the client side, but those scripts are supplied by the middie.
• The techie supplies data access objects, and helps
the middie create validators for them. These validators may be almost the same
objects as the beans that might have been used on 'active pages', but their
behavior is determined by the middie, and they are not interconnected by the
webbie.
• The middie would like to deal in a highly organized
and automated way with real object-oriented code.
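The division of labor above can be made concrete: the middie owns a validator object whose rules live in the business tier, not in any page. A small sketch (the interface, class, and field names here are invented for the example, not part of any particular framework):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical middie-owned validator for a submitted order form.
// The webbie's page only has to post fields with these names; the
// rules themselves never appear in the page.
interface FormValidator {
    List<String> validate(Map<String, String> form);
}

class OrderFormValidator implements FormValidator {
    @Override
    public List<String> validate(Map<String, String> form) {
        List<String> errors = new ArrayList<>();
        String qty = form.get("quantity");
        if (qty == null || qty.isEmpty()) {
            errors.add("quantity is required");
        } else {
            try {
                if (Integer.parseInt(qty) <= 0) {
                    errors.add("quantity must be positive");
                }
            } catch (NumberFormatException e) {
                errors.add("quantity must be a number");
            }
        }
        return errors;
    }
}
```

The webbie can redesign the form freely; as long as the field names hold, the middie's rules keep working, and the techie's data objects never see bad input.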
Each page supplied by a webbie compiles into
an object following a very formal model, usually some version of a Document
Object Model that is part of the W3C standards for XML. This will be a Java
(what else) object which encapsulates in a standard way the contents of the
page, and allows it to be manipulated quite formally and even automatically.
Manipulate is the key word. The middie uses data objects provided by the techie
(typically following JDBC, JDO, or something similar) and connects them with
the document objects compiled by XMLC from pages supplied by the webbie.
The connection is done in a manipulator
program, probably a Java servlet with accompanying utility classes.
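A manipulator can be sketched with nothing but the JDK's own DOM classes. XMLC generates typed accessors for each agreed-upon ID; the raw operation underneath is just "find the element carrying this id, replace its text". The page markup and the id name below are made up for the example, and this is a sketch of the idea rather than XMLC's actual generated code:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical manipulator: puts live data where the webbie left an id.
class Manipulator {

    // Parse a well-formed page into a DOM document.
    static Document parse(String page) {
        try {
            return DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                    page.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Find the element carrying the agreed id, whatever its tag.
    // (Document.getElementById only works when a DTD declares the
    // attribute as an ID type, so we scan all elements instead.)
    static Element byId(Document doc, String id) {
        NodeList all = doc.getElementsByTagName("*");
        for (int i = 0; i < all.getLength(); i++) {
            Element e = (Element) all.item(i);
            if (id.equals(e.getAttribute("id"))) return e;
        }
        return null;
    }

    // The middie's move: replace the element's text with live data.
    static void setText(Document doc, String id, String value) {
        Element e = byId(doc, id);
        if (e != null) e.setTextContent(value);
    }
}
```

In a real deployment this logic would sit inside a servlet's request handler, with the data value coming from one of the techie's data access objects rather than a literal.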
What is needed to see that these things come
together, and remain together, through dynamic changes on the webbie scale, and
dynamic changes in the actual data (the techie has gone on to other projects)?
Didn't Java teach us that 'the interface is
the contract'?
The additional touch is that XMLC formalizes
interfaces to the presentation and to the data, making sure that interoperation
is preserved. This is not a complex process, and does not require
person-years of effort. It is a straightforward process, based in most cases on what we
have already done, not much different from the long-standing practice of
using a 'data dictionary', perhaps now coupled with a 'presentation dictionary'
describing the kinds of information that can be presented, but not describing
how that presentation is done.
Use the "plain old" ID attribute that
is part of the DOM, and a lesser-known part of HTML as it exists today.
The document standards already say that such
an ID should be unique, and XMLC provides methods in its API for manipulating
objects using that ID. It also provides methods for creating new elements and
modifying old ones, and these include ways of handling attributes, all based
upon that ID.
When the webbie modifies an HTML page, the ID
may end up attached to a different type of element, which will present the
involved information in a different manner. Thus it might be inside the
<TD> of a table cell, the <LI>, <DT>, or
<DD> of a list, or sitting in its own <P> paragraph.
When the XMLC compiler runs, it makes a
representation of the element part of the object that represents the page,
using the ID we had all agreed would identify the information involved. The
manipulator program will not need to change in any way to accommodate a change
in presentation.
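That stability is easy to check: the same lookup-by-id code keeps working whether the webbie has placed the id on a paragraph or, after a redesign, on a table cell. A small self-contained demonstration (the markup and id name are invented for the example):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// The manipulator's view of a page: only the id matters, not the tag.
class IdLookupDemo {
    static String textOfId(String page, String id) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(
                    page.getBytes(StandardCharsets.UTF_8)));
            // Scan every element for the agreed id.
            NodeList all = doc.getElementsByTagName("*");
            for (int i = 0; i < all.getLength(); i++) {
                Element e = (Element) all.item(i);
                if (id.equals(e.getAttribute("id"))) {
                    return e.getTextContent();
                }
            }
            return null;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```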
When the middie needs to change the way
information is put together, the manipulator program can be changed. If there
is nothing new to present, the page need not be touched.
And when the data changes, hopefully many
thousands of times per hour if business is good, no one needs to do anything except
appreciate the folks in the back room, now abandoned for a while by the techie,
who keep it running.
No more frazzled nerves.
Let's spend our time developing
components that cooperate with one another, and systems that respect their
independence while helping them work together.