It's really the design center. In a mainframe environment, everything is tightly controlled and managed. They don't call it the "glass house" for nothing. In the old days, developers literally could not get into the same room as the computer. At one of the first jobs I had after college, the developers sat in cubicles filling out coding sheets that were handed to punchcard machine operators. The card decks were passed into the computer room, and we waited for our printouts to come back out so we could debug the code.
This mentality of control extended to the TP software systems the vendors developed in those days. Developers were not trusted to correctly figure out how to begin and end transactions, so demarcation was always automatic. And every data resource on the mainframe was accessed within a transaction, at least by default (in some cases you could override this). In the early days of TP standardization, in the ISO TP and X/Open DTP committees, this was in fact a big debate topic: how much control, if any, should the developer be allowed? Some of the first TP products to allow a high degree of developer control over transactions were Tuxedo and Encina, both of which included APIs specifically for controlling transaction behavior. Some of this made it into OTS/JTS, and through that into Java EE, but for the most part the transaction-control portions of the Java Transaction API remain restricted from typical developer use.
In other words, as the Java and .NET environments became popular for transaction processing, the assumptions about developer access to transaction control APIs changed a bit. You still see a default of system control over transactions through the use of annotations and attributes, but you also find explicit APIs if you look for them.
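The difference between system-controlled and explicit demarcation is easiest to see in code. The sketch below uses Python's built-in sqlite3 module purely for illustration, not any of the TP systems named above; the table and account names are invented. The point is only where the begin/commit decisions live: in the developer's hands rather than the container's.

```python
# Illustrative sketch of explicit transaction demarcation, using
# Python's stdlib sqlite3 module (names and data are invented).
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# developer explicitly marks where the transaction begins and ends.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")

# Explicit demarcation: begin, do all the work, then commit.
conn.execute("BEGIN")
conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
conn.execute("UPDATE accounts SET balance = balance + 30 WHERE name = 'bob'")
conn.execute("COMMIT")

print(dict(conn.execute("SELECT name, balance FROM accounts")))
# → {'alice': 70, 'bob': 30}
```

In an annotation-driven Java EE or .NET environment, the equivalent of the BEGIN and COMMIT lines is generated by the container around a method boundary; the explicit APIs exist for the cases where that boundary is in the wrong place.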
The reason for the debate is that it is pretty easy for a developer to make a mistake in controlling a transaction: forgetting to roll back when an exception occurs, holding locks too long, failing to resolve the status of all participants in a distributed transaction, and so on. When mistakes like this happen, data consistency can be broken, which often means systems shut down until the data can be fixed (often by hand). The real benefit of a transaction is its ability to automatically return the system to a known state when a failure occurs, so that applications can restart without worrying about data inconsistencies caused by partial updates.
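Both the classic mistake and the "known state" guarantee can be sketched in a few lines. This is an illustrative example with sqlite3 and made-up account data, not production code: a failure is simulated partway through a transfer, and the rollback discards the partial update.

```python
# Sketch of recovery to a known state: a failure mid-transaction is
# rolled back, so no partial update survives. Uses stdlib sqlite3;
# the accounts and amounts are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")

def transfer(amount):
    conn.execute("BEGIN")
    try:
        conn.execute(
            "UPDATE accounts SET balance = balance - ? WHERE name = 'alice'",
            (amount,))
        if amount > 100:
            raise RuntimeError("simulated failure after a partial update")
        conn.execute(
            "UPDATE accounts SET balance = balance + ? WHERE name = 'bob'",
            (amount,))
        conn.execute("COMMIT")
    except Exception:
        conn.execute("ROLLBACK")  # omitting this is the classic mistake
        raise

try:
    transfer(500)  # fails after debiting alice
except RuntimeError:
    pass

# The rollback undid the debit: balances are back in a known state.
print(dict(conn.execute("SELECT name, balance FROM accounts")))
# → {'alice': 100, 'bob': 0}
```

If the ROLLBACK line were missing, the dangling transaction would keep holding locks and the half-finished debit could later be committed by accident, which is exactly the class of error automatic demarcation was meant to prevent.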
The sort of funny thing about the Java and .NET transaction processing environments is that they inherit many of the design center assumptions from the mainframe environment. I think this is largely because Java and .NET TP capabilities were designed, at least in part, to move applications off of mainframes onto commodity hardware and open systems. To do this they had to replicate the features and functions that mainframe applications relied upon. However, neither Windows nor UNIX operating system environments were typically as controlled or managed as mainframes were, especially not when it comes to developer access.
So we have ended up with a strange world in which Java and .NET TP environments tried to replicate mainframe TP environment features and functions, yet inherently cannot, since the systems Java and .NET environments run on are not as tightly controlled, and the level of control over the system is one of the major assumptions on which TP system software is designed.
I think this is one of the big reasons distributed transactions have gotten a kind of bad name in Java and .NET environments. Another reason is that the early Java and .NET TP environments provided automatic transaction management attributes that always started a distributed transaction, whether one was needed or not. A distributed transaction is needed only when a transaction involves multiple data resources, yet the majority of TP applications use a single database. Distributed transactions have more overhead, by definition, than single-resource transactions, which hurt performance and scalability for those single-database transactions. Both environments have solutions for this now, although the newer .NET TP environment, based on System.Transactions, has a feature the Java TP environment doesn't: a single-resource transaction can be automatically promoted to a distributed transaction when the application accesses a second resource.
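The promotion idea can be sketched with a toy coordinator: stay with a cheap one-phase commit while only one resource is enlisted, and switch to two-phase commit the moment a second resource joins. This mimics the System.Transactions behavior in spirit only; every class and method name below is invented for illustration, and real resource managers do durable logging that this sketch omits.

```python
# Toy sketch of promoting a single-resource transaction to a
# distributed one. All names are invented; this is not the actual
# System.Transactions or JTA API.
class Resource:
    def __init__(self, name):
        self.name = name
        self.state = "active"

    def prepare(self):              # phase 1 of two-phase commit
        self.state = "prepared"
        return True                 # vote to commit

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"

class Transaction:
    def __init__(self):
        self.resources = []
        self.distributed = False

    def enlist(self, resource):
        self.resources.append(resource)
        if len(self.resources) > 1:
            self.distributed = True  # promoted: now requires 2PC

    def commit(self):
        if not self.distributed:
            self.resources[0].commit()   # one-phase: low overhead
        elif all(r.prepare() for r in self.resources):  # phase 1
            for r in self.resources:
                r.commit()                               # phase 2
        else:
            for r in self.resources:
                r.rollback()

tx = Transaction()
tx.enlist(Resource("db"))
print(tx.distributed)         # False: single resource, one-phase commit
tx.enlist(Resource("queue"))
print(tx.distributed)         # True: promoted to distributed
tx.commit()
```

The design point is that the application never chooses between the two protocols; the coordinator observes which resources are actually enlisted and pays the two-phase cost only when it must.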
Today we are seeing a real trend toward asynchronous communication protocols and loosely coupled SOA and REST/HTTP-based environments, so the mainframe-style TP environments that the early Java and .NET TP environments sought to replicate don't actually fit very well. In some ways this is "back to the future": the application now has to do more work to ensure transactional consistency and automatic recovery from failure.
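One common form of that extra application-level work is compensation: instead of one distributed transaction spanning loosely coupled services, each step commits locally and registers an undo action to run if a later step fails. The sketch below is a minimal, invented illustration of that pattern; the step names are made up and a real system would also have to persist progress so compensation survives a crash.

```python
# Minimal sketch of application-level compensation across loosely
# coupled steps. Step names and the helper are invented for
# illustration; real systems persist progress durably.
def run_with_compensation(steps):
    """steps: list of (do, undo) callables.

    Runs each 'do' in order; on failure, runs the 'undo' of every
    completed step in reverse order, then re-raises."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):   # compensate in reverse
            undo()
        raise

log = []

def reserve():        log.append("reserve")
def cancel_reserve(): log.append("cancel-reserve")
def charge():         raise RuntimeError("charge failed")

try:
    run_with_compensation([(reserve, cancel_reserve), (charge, lambda: None)])
except RuntimeError:
    pass

print(log)  # → ['reserve', 'cancel-reserve']
```

Unlike a rollback, a compensating action runs after its step has already committed, so it must be a forward operation that semantically cancels the work, which is precisely the extra burden the transaction system used to carry for the application.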