As a developer working with Azure Document Intelligence, optimizing document processing is crucial to reduce processing time without compromising the quality of output. In this post, I will share how I managed to improve the performance of my text analytics code, significantly reducing the processing time from 10 seconds to just 3 seconds, with no impact on the output quality.
Original Code vs Optimized Code
Initially, the document processing took around 10 seconds, which was decent but could be improved for better scalability and faster execution. After optimization, the processing time was reduced to just 3 seconds by applying several techniques, all without affecting the quality of the results.
Original Processing Time
- Time taken to process: 10 seconds
Optimized Processing Time
- Time taken to process: 3 seconds
Steps Taken to Optimize the Code
Here are the key changes I made to optimize the document processing workflow:
1. Preprocessing the Text
Preprocessing the text before passing it to Azure's API is essential for cleaning and normalizing the input data. This helps remove unnecessary characters, stop words, and any noise that could slow down processing. A simple preprocessing function was added to clean the text before calling the Azure API. Additionally, preprocessing reduces the number of tokens sent to Azure's API, directly lowering the associated costs since Azure charges based on token usage.
```python
import re

def preprocess_text(text):
    # Implement text cleaning: remove unnecessary characters, normalize text, etc.
    cleaned_text = text.lower()  # Example: convert to lowercase
    cleaned_text = re.sub(r'[^\w\s]', '', cleaned_text)  # Remove punctuation
    return cleaned_text
```
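The section above also mentions removing stop words, which the function does not yet do. A minimal sketch of that step, using a small hand-rolled stop-word list (the list here is illustrative, not exhaustive; libraries such as NLTK ship much fuller lists):

```python
import re

# Illustrative stop-word list; a real pipeline would use a fuller one.
STOP_WORDS = {"a", "an", "the", "is", "are", "of", "to", "and", "in"}

def remove_stop_words(text):
    # Lowercase, strip punctuation, then drop common stop words to
    # shrink the token count sent to the API.
    tokens = re.sub(r'[^\w\s]', '', text.lower()).split()
    return ' '.join(t for t in tokens if t not in STOP_WORDS)
```

Because Azure charges per processed text unit, every token removed here is a token you do not pay for.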
2. Specifying the Language Parameter
Azure Text Analytics API automatically detects the language of the document, but specifying the language parameter in API calls can skip this detection step, thereby saving time.
For example, by specifying language="en" when calling the API for recognizing PII entities, extracting key phrases, or recognizing named entities, we can directly process the text and skip language detection.
This reduces unnecessary overhead and speeds up processing, especially when dealing with a large number of documents in a specific language.
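As a sketch, this can be wrapped in a thin helper (the wrapper name is my own; `client` is assumed to be an authenticated TextAnalyticsClient from the azure-ai-textanalytics package):

```python
def recognize_pii_in_english(client, documents):
    """Recognize PII entities while skipping language detection.

    `client` is assumed to be an authenticated TextAnalyticsClient.
    Passing language="en" tells the service the input is English, so it
    does not spend time detecting the language of each document.
    """
    return client.recognize_pii_entities(documents, language="en")
```

The same keyword argument works for key-phrase extraction and named-entity recognition calls.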
3. Batch Processing
Another optimization is to batch multiple documents into a single request instead of making one API call per document. This cuts per-call overhead, and Azure can process the batched documents in parallel, reducing overall processing time.
```python
# Example of sending multiple documents in one batch
documents = ["Document 1 text", "Document 2 text", "Document 3 text"]
batch_response = text_analytics_client.extract_key_phrases(documents)
```
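The service caps how many documents a single request may contain (the exact limit varies by operation and API version, so check the current service limits), which makes a small chunking helper useful when the corpus is larger than one batch. A sketch:

```python
def chunk_documents(documents, batch_size):
    # Yield successive slices of `documents`, each at most `batch_size`
    # long, so every batch stays within the per-request document limit.
    for start in range(0, len(documents), batch_size):
        yield documents[start:start + batch_size]
```

Each yielded batch can then be passed to the client in a single call, keeping the number of round trips to a minimum.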
4. Parallel API Calls
If you’re working with a large dataset, consider using parallel API calls for independent tasks. For example, you could recognize PII entities in one set of documents while extracting key phrases from another set. This parallel processing can be achieved using multi-threading or asynchronous calls.
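A minimal sketch of the multi-threaded variant using concurrent.futures; `analyze_documents` here is a stand-in for any independent API call (PII recognition on one set, key-phrase extraction on another), since the two tasks do not depend on each other:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_documents(task_name, documents):
    # Stand-in for an Azure API call; a real version would invoke,
    # e.g., client.recognize_pii_entities(documents).
    return (task_name, len(documents))

pii_docs = ["Doc A", "Doc B"]
key_phrase_docs = ["Doc C", "Doc D", "Doc E"]

# Run the two independent tasks concurrently. Threads suit I/O-bound
# API calls because the time is spent waiting on the network, not the CPU.
with ThreadPoolExecutor(max_workers=2) as pool:
    pii_future = pool.submit(analyze_documents, "pii", pii_docs)
    kp_future = pool.submit(analyze_documents, "key_phrases", key_phrase_docs)
    results = [pii_future.result(), kp_future.result()]
```

For very high document volumes, the async client in the Azure SDK is an alternative to thread pools.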
Performance Gains
After applying these optimizations, the processing time dropped from 10 seconds to just 3 seconds per execution, which represents a 70% reduction in processing time. This performance boost is particularly valuable when dealing with large-scale document processing, where speed is critical.
Conclusion
Optimizing document processing with Azure Document Intelligence not only improves performance but also reduces operational costs. By incorporating preprocessing steps, specifying the language parameter, and utilizing batch and parallel processing, you can achieve significant performance improvements while maintaining output quality and minimizing costs by reducing token usage.
If you’re facing similar challenges, try out these optimizations and see how they work for your use case. I’d love to hear about any additional techniques you’ve used to speed up your document processing workflows while saving costs.