Since its founding in 2012, OTA Insight has provided user-friendly revenue management tools and grown into the global leader in cloud-based hospitality business intelligence.
Since then, OTA Insight has won many industry awards and was named the UK’s 17th fastest-growing private technology company in The Sunday Times Fast Track Hiscox #TechTrack100. OTA Insight became the preferred revenue management solution for over 60,000 independent, local and global chain properties in more than 185 countries, supported by 450 stellar employees.
Recently, OTA Insight changed its name to Lighthouse to more accurately reflect the solutions it delivers through its commercial platform.
Lighthouse collects data from different sources, including online travel agencies (OTAs), hotel websites, and booking engines. It uses this data to track hotel performance, identify trends, and derive the strategic insights it provides to its customers.
Recently, Lighthouse became interested in using Generative AI to summarise data from hotels. This allows them to provide hoteliers with a brief overview of what is happening in their business. Today, Revenue Managers write those reports manually, taking them a fair amount of time to:
- Analyse past data;
- Understand previous trends;
- Predict possible future trends;
- Provide meaningful insights on actions to take.
For this proof of concept, Generative AI technologies were used to tackle the first two points above, i.e. providing quick insights into what has happened in the past. The last two points, i.e. suggesting what may happen in the future, are left to Revenue Managers.
The solution speeds up the reporting process for Revenue Managers: it takes the hotel's available past data and produces a textual interpretation of what happened during that period.
The solution & methodology
The emails Revenue Managers write have a more or less fixed structure, so the problem was split into multiple sub-problems, each solved with the most appropriate technique:
- Generate insights out of potential big events happening near a hotel,
- Generate insights out of a specific segment (part of the market) that had an impact on the hotel’s revenue,
- Generate insights out of information about the two weeks ahead,
- Generate insights out of information about the day before the data extraction, comparing it with the same time last year.
Problems 1 & 2: Generate event-related & segment-related summaries
For the first two sub-problems, prompt engineering was sufficient: the structure of the expected output was simple, and the LLM only needed to formulate the sentences naturally.
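To make the prompt-engineering step concrete, here is a minimal sketch of what an event-summary prompt template could look like. The field names, wording, and persona instruction are illustrative assumptions, not Lighthouse's actual prompt.

```python
# Illustrative template for the event sub-problem; the LLM fills in the
# natural-language phrasing, while the data is injected verbatim.
EVENT_PROMPT = """You are a hotel Revenue Manager writing a short report.
Summarise, in one or two natural sentences, how the event below may have
affected demand for the hotel.

Event name: {name}
Dates: {start_date} to {end_date}
Distance from hotel: {distance_km} km
Expected attendance: {attendance}
"""

def build_event_prompt(event: dict) -> str:
    """Fill the template with one event's data before sending it to the LLM."""
    return EVENT_PROMPT.format(**event)
```

The same pattern applies to the segment sub-problem, with segment name and revenue-impact figures substituted into a similar template.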
Problem 3: Generate insights for two weeks ahead
For this part of the problem, simple prompt engineering was also tried, but it was not sufficient to extract high-quality summaries; the data was more complex to analyse. Hence, few-shot prompting was integrated to provide the LLM with some examples of input-output relationships. In addition, to avoid including irrelevant examples, each example was mapped to an embedding with the textembedding-gecko model and stored in a vector database (ChromaDB). At that point, it was a matter of encoding the current input data, performing a similarity search against the database, and extracting the three closest examples. Doing so drastically improved the quality of the text produced by the model.
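The retrieval step above can be sketched as follows. In the actual solution the embeddings come from textembedding-gecko and the search is delegated to ChromaDB; this self-contained toy version uses plain cosine similarity to show the same mechanic of picking the closest examples and prepending them as few-shot demonstrations.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k_examples(query_emb, examples, k=3):
    """Return the k stored examples closest to the query embedding.

    `examples` is a list of (embedding, input_text, output_text) tuples,
    standing in for the ChromaDB collection used in the real solution.
    """
    ranked = sorted(
        examples,
        key=lambda ex: cosine_similarity(query_emb, ex[0]),
        reverse=True,
    )
    return ranked[:k]

def build_few_shot_prompt(query_text, query_emb, examples, k=3):
    """Prepend the k most similar input/output pairs to the current input."""
    shots = top_k_examples(query_emb, examples, k)
    parts = [f"Input: {inp}\nOutput: {out}" for _, inp, out in shots]
    parts.append(f"Input: {query_text}\nOutput:")
    return "\n\n".join(parts)
```

Selecting examples by similarity rather than at random keeps the few shots relevant to the current data, which is what improved the summary quality here.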
Problem 4: Generate insights about the day before
As with the previous sub-problem, using only prompt engineering proved problematic. The desired focus of the sentence changed depending on the available data: for the day before, sometimes occupancy would be the important point to address, sometimes revenue, and so on. Hoping the model would pick up the unusual or most important topics to discuss almost produced good results, but the summaries turned out quite generic, as if the data had simply been placed into a template sentence.
As a first, more advanced attempt, few-shot prompting was used. The problem then was that the model would overfit to the examples, producing false values in the summaries.
Finally, fine-tuning the model on this specific part of the data allowed the creation of a tailored model that mimicked the writing style of Revenue Managers while keeping the data accurate.
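Fine-tuning requires pairing historical data with the summaries Revenue Managers actually wrote. The sketch below shows one plausible way to serialise such pairs into a JSONL training file; the `input_text`/`output_text` field names follow the convention used by Vertex AI supervised tuning for PaLM text models, and the example data is invented.

```python
import json

def to_tuning_jsonl(pairs, path):
    """Write (data summary, manager-written text) pairs as JSONL.

    Each line becomes one supervised training example: the structured
    data rendered as text is the input, and the Revenue Manager's
    hand-written sentence is the target output.
    """
    with open(path, "w") as f:
        for data_summary, manager_text in pairs:
            record = {"input_text": data_summary, "output_text": manager_text}
            f.write(json.dumps(record) + "\n")
```

Training on such pairs is what lets the tuned model reproduce the managers' style while staying faithful to the numbers in the input.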
Making the API available
The final step was to expose the solution as an API hosted on Cloud Run. A simple POST request is sent to this service, as shown in the diagram below:
The application code generates individual parts of the summary and concatenates the results to send back to the backend that initiated the call.
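The assembly logic can be sketched as below. The four generator functions are stubs standing in for the LLM-backed sub-problem solutions described above, and the payload field names are illustrative assumptions about the request body, not the service's actual schema.

```python
def generate_event_insights(events):
    # Stub: in the real service this calls the prompt-engineered LLM.
    return f"{len(events)} notable event(s) near the hotel." if events else ""

def generate_segment_insights(segments):
    # Stub for the segment-related summary.
    return f"Key segment: {segments[0]}." if segments else ""

def generate_two_weeks_ahead(forecast):
    # Stub for the few-shot-prompted two-weeks-ahead summary.
    return "Demand for the two weeks ahead looks stable." if forecast else ""

def generate_day_before(yesterday):
    # Stub for the fine-tuned day-before summary.
    return "Yesterday performed in line with last year." if yesterday else ""

def generate_summary(payload: dict) -> str:
    """Assemble the full email body from the four sub-problem generators."""
    parts = [
        generate_event_insights(payload.get("events", [])),
        generate_segment_insights(payload.get("segments", [])),
        generate_two_weeks_ahead(payload.get("forecast", {})),
        generate_day_before(payload.get("yesterday", {})),
    ]
    # Drop empty sections and concatenate, as the Cloud Run service does
    # before returning the result to the backend that initiated the call.
    return "\n\n".join(p for p in parts if p)
```

Keeping each sub-problem behind its own function means a technique can be swapped (prompt engineering, few-shot, fine-tuned model) without touching the rest of the pipeline.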
A very important part of this project was the feedback from Revenue Managers, which was taken into account to refine the prompt engineering techniques. Periodically, examples of what the models could generate were shared with them, and each example received a score along with valuable information on what should be kept or improved.
As this is a proof of concept, the main goal was to help Lighthouse learn how to use Generative AI technologies and move forward with this use case in production, or potentially apply the lessons learned to other use cases. With this in mind, Lighthouse felt comfortable taking full ownership of the solution, to keep refining it with the help of Revenue Managers' feedback and move to production when they deem the solution accurate enough.