Having worked in the file transfer and data exchange markets for nearly two decades, one thing that has always struck me is the lack of management and governance capabilities on offer – particularly when one of the acronym’s three letters (the “M” in MFT) stands for “managed.”
Over the years, the managed file transfer industry has focused on the file transfer aspect. Assuming the management element to be synonymous with centralization and control, most vendors have taken it no further than providing an administration console – in many cases accepting that third parties can make up for any gaps.
But is this the right strategy?
File transfer solutions have come to provide a critical function for many businesses, ensuring retail shelves are full, keeping flights running on time, or even enabling us access to healthcare in a timely manner.
At Axway, we estimate that we help our automotive manufacturing customers to produce over 6.5 million vehicles per year. It comes as no surprise that our customers describe themselves as operationally reliant on the health of their managed file transfer solutions.
So, how do we rationalize these two opposing forces: the need for high levels of operational health vs. the lack of capabilities to ensure it? Here are some areas to think about.
The visibility of MFT system health and dependencies
I always think of file transfer solution health existing in two areas:
- the MFT application itself
- the health of any dependencies
Some MFT vendors do offer limited capabilities for monitoring the MFT application – usually within the administration console of the solution itself. There may even be some capabilities associated with clearing or starting the queues or services which make up the application.
As useful as these controls are, they tend to be reactive, rarely offering any warnings or proactive actions to maintain service levels.
Dependencies can be anything which the MFT application requires to operate. Examples include the host operating system, a database, a file store, maybe a centralized authentication source. MFT solutions do not operate in isolation and often their speed and responsiveness are tied to that of their dependencies.
For example, if an MFT operation is reliant on the completion of a SQL query, then the speed of response from the database is critical to the speed of the MFT solution completing that operation.
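As a rough illustration of what proactive dependency monitoring could look like, the sketch below times a trivial query against a database dependency and flags it as degraded before transfers start to back up. It is a minimal sketch, not a real MFT feature: the threshold, the function names, and the use of an in-memory SQLite database as a stand-in for the real dependency are all assumptions.

```python
import sqlite3
import time

# Hypothetical latency threshold (seconds) beyond which we flag the
# dependency as degraded, before transfers start to back up.
LATENCY_WARN_SECONDS = 0.5

def probe_database_latency(conn: sqlite3.Connection) -> float:
    """Time a trivial query against the MFT solution's database dependency."""
    start = time.perf_counter()
    conn.execute("SELECT 1").fetchone()
    return time.perf_counter() - start

def dependency_status(latency: float) -> str:
    """Map a measured latency onto a simple health state."""
    return "degraded" if latency > LATENCY_WARN_SECONDS else "healthy"

# Probe an in-memory SQLite database standing in for the real dependency.
conn = sqlite3.connect(":memory:")
print(dependency_status(probe_database_latency(conn)))
```

Running a probe like this on a schedule – rather than waiting for transfers to slow down – is the difference between the proactive and reactive postures described above.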
Sadly, however, most managed file transfer applications surrender the visibility and monitoring of dependencies to the customer, drawing a hard line of responsibility at the application boundary.
Now, imagine having to manage hundreds of MFT applications/agents throughout a network while being limited to a localized administration console – or, worse, having no real visibility capabilities at all.
Risks include the following:
- Congestion or an outage in a shared dependency could result in sub-optimal performance of the managed file transfer application. If small enough, this may remain undetected for some time.
- Poor resource allocation. If you do not know that you are hitting the edges of your capacity, you do not have the opportunity to improve throughput and performance.
- Remaining reactive and always chasing your tail. Being proactive not only means maintaining operational health. It also means better use of your time – and your team’s – by ending the cycle of firefighting.
The observability of file transfer outcomes
File transfer applications typically handle this better than the health monitoring described above. Most, if not all, allow you to trace the files they transfer and review the outcome and various other metrics associated with each transfer.
Where they differ is in the quality of their reporting and whether it extends end to end.
Because a file may pass through multiple systems and points of processing on its way to its destination, the ability to track files end-to-end – and not system by system in isolation – can be of critical importance.
Risks of not being able to do so include:
- Having to track files across multiple systems, using multiple administration consoles, and stitching timelines together is hugely time-consuming and may not even be possible. The subsequent effect could be that files are retransmitted despite having been received/sent successfully, causing considerable onward problems.
- Not being able to provide basic high-level stats such as transfer times and speeds means not being able to measure success against a baseline and having no real understanding of what good looks like.
- Potentially falling foul of auditors, security teams and compliance requirements. Many of the international data security regulations, industry standards, and some elective standards require the ability to track data throughout its lifecycle.
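To make the baseline point above concrete, here is a minimal sketch of measuring transfers against a baseline. The transfer records, the derived speeds, and the “half the baseline” rule are hypothetical examples for illustration, not figures or logic from any real MFT product.

```python
from statistics import mean

# Hypothetical transfer records: (file_size_bytes, duration_seconds)
transfers = [
    (50_000_000, 25.0),   # 2.0 MB/s
    (120_000_000, 40.0),  # 3.0 MB/s
    (10_000_000, 20.0),   # 0.5 MB/s - a candidate outlier
]

def throughput_mbps(size_bytes: int, seconds: float) -> float:
    """Transfer speed in megabytes per second."""
    return size_bytes / seconds / 1_000_000

speeds = [throughput_mbps(size, dur) for size, dur in transfers]
baseline = mean(speeds)

# Flag transfers running at less than half the baseline speed.
slow = [spd for spd in speeds if spd < baseline / 2]
print(f"baseline: {baseline:.2f} MB/s, slow transfers: {len(slow)}")
```

Even stats this simple give you a working definition of “what good looks like” – without them, a slow transfer is indistinguishable from a normal one.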
See also: Why MFT matters for enterprise compliance and risk reduction
Managing problems as they arise
Sometimes, things just go wrong – whether it be a file arriving late due to an earlier delay, or an MFT application having been misconfigured post-upgrade to be in a poor security state.
The optimal response would be an automated one, which can either warn of the problem or, better yet, correct it for us. The ability to perform automated corrections is not a new concept; file transfer monitoring software, configuration integrity monitoring solutions, and SOAR (security orchestration, automation, and response) have been around for some time. However, the rise of agentic AI and its ability to automate actions has brought this concept back into sharp focus.
Rather than having a file transfer system limp along for the time it takes for the problem to be identified and corrected, why not correct it within the boundaries of acceptable changes?
Take, for example, a scenario where a file transfer workflow awaits two files from two different sources and then combines them before being sent onto the destination. Should one arrive but not the other, in traditional MFT, the workflow would fail, and nothing would be sent to the destination.
Where there is a corrective capability, that merge action may wait for longer, or it may notify or reach out to the source to query it for the status of the file.
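A corrective “wait longer, then reach out” behavior like the one just described could be sketched as follows. This is an illustrative assumption, not a real MFT API: the function name, grace period, and return values are all hypothetical.

```python
import time
from pathlib import Path

def merge_when_ready(source_a: Path, source_b: Path,
                     grace_seconds: float = 30.0,
                     poll_seconds: float = 1.0) -> str:
    """Return 'merged' if both expected files arrive within the grace
    period; otherwise return 'alerted' to signal that the missing
    source should be notified or queried, rather than failing silently."""
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        if source_a.exists() and source_b.exists():
            # Both inputs present: combine them before onward delivery
            # (the merge itself is elided in this sketch).
            return "merged"
        time.sleep(poll_seconds)
    # Grace period expired: escalate instead of simply failing the workflow.
    return "alerted"
```

The key design choice is that the failure path becomes an action (wait, then alert or query the source) rather than a dead end, which is exactly the gap traditional MFT workflows leave open.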
Lack of operational management capabilities in MFT can present the following risks:
- Rigid systems require high degrees of human management and input. This has a direct impact on labor costs and reduces trust in the file transfer application.
- Response to problems is delayed due to corrective actions being manual and requiring human-led triage before enacting them. This pushes businesses away from the productivity frontier.
- Impacted relations with third parties. The ability to transact with you in a seamless manner is prized and can be the difference between you and a competitor. Poor systems can drive cooperation elsewhere and damage your reputation.
Putting the “managed” back into managed file transfer
For many customers of MFT applications, the level of operational reliance is such that the success of the business scales with the success of managed file transfer. This might seem monolithic, and maybe it is, but the movement of files and data through a business is often paramount.
Blind reliance on these systems is scary. While MFT applications do provide a comfort blanket of controls in managing file transfers, they do not provide the comprehensive controls to drive improvement. They do not help our automotive manufacturers get from 6.5 million vehicles to more – or to get to the same number faster.
File transfer has taken us on a journey over the past decades of efficiency, automation, and interconnectedness. What we need now is for the “managed” to catch up, to drive us forward.
Frequently Asked Questions
What is visibility in managed file transfer?
Visibility relates to the ability to monitor system health. This could mean the health of the managed file transfer application itself – such as service/daemon status, queues, and resource usage – as well as the health of its dependencies.
Most MFT applications offer limited visibility options, and often none when it comes to dependencies. For some customers, a wider network monitoring solution may be warranted.
What does observability mean in the context of MFT?
When we talk about observability, we are going beyond basic monitoring and attempting to discover why something is happening. In the context of MFT, this means looking at end-to-end file transfers, their outcomes, and attempting to explain why something has occurred – particularly when it has failed.
It is especially important in MFT that files can be observed end to end as they may pass through multiple systems, nodes, and points of processing.
Why is it risky to operate a managed file transfer system without full visibility and observability?
Managed file transfer systems are typically employed in operationally reliant environments, where the optimal function of the system is paramount.
Even when configured correctly, the solution may operate without failing yet still fall short of optimal performance – and where problems do develop, the time to detect and correct them will be extended, to the detriment of the business.
How can I improve observability in my MFT environment?
Some MFT solutions have built-in or add-on capabilities to improve the end-to-end tracking of files and overall observability capabilities.
This could be via dashboards, reporting, or proactive alerting via modern channels such as Microsoft Teams or even Apache Kafka.
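As a simple illustration of the alerting side, a proactive notification for Microsoft Teams can be assembled as a JSON payload for an incoming webhook (Teams incoming webhooks accept a simple body with a `text` field). The flow name, helper function, and webhook URL below are placeholders, not part of any specific MFT product.

```python
import json

# Placeholder for a real Teams incoming webhook URL.
TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/placeholder"

def build_transfer_alert(flow: str, status: str, detail: str) -> str:
    """Build the JSON body for a proactive file transfer alert."""
    message = f"MFT alert - flow '{flow}' is {status}: {detail}"
    return json.dumps({"text": message})

payload = build_transfer_alert("invoices-eu", "failed", "destination unreachable")
print(payload)
# The body would then be POSTed to TEAMS_WEBHOOK_URL with an HTTP client.
```

The same alert content could just as easily be published to a Kafka topic for downstream consumers; only the delivery mechanism changes.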
Discover a modern approach to MFT with Axway’s Zero Trust security model.