Pass multimodal input to Gemini for data warehouse model creation.

Data warehouses and data lakes often contain complex hierarchical data structures sourced from diverse origins, making data modeling challenging and error-prone. Creating flexible, modular data models that adapt swiftly to evolving business needs without excessive rework calls for advanced technologies like generative AI.

Multimodal large language models (LLMs) are particularly promising in this regard. They can analyze varied inputs in the data lake, from text descriptions and code to images, understand the relationships among them, and suggest or automatically generate schema layouts. This simplifies the laborious task of implementing data models in databases, freeing developers to focus on higher-value data management tasks.

In this blog, we explore using multimodal LLMs in BigQuery to create database schemas. Using real-world examples of entity relationship (ER) diagrams and data definition language (DDL) statements, we demonstrate a three-step process to build a schema.

We’ll illustrate this with Data Beans, a fictional technology company that uses BigQuery’s integration with Vertex AI to analyze unstructured data and combine it with structured data. This setup enables Data Beans to streamline data modeling and gain valuable insights.

STEP 1: Create an entity relationship diagram

The first step is to create an ER diagram using your favorite modeling tool, or to take a screenshot of an existing ER diagram. The ER diagram can contain primary-key and foreign-key relationships, and is then used as input to the Gemini 1.0 Pro Vision model to create the relevant BigQuery DDLs.

STEP 2: Create a prompt with the ER image as input

Next, to create the DDL statements in BigQuery, write a prompt that takes the ER image as input. The prompt should include detailed, relevant rules for the Gemini model to follow. In addition, make sure the prompt captures learnings from previous iterations; in other words, update your prompt as you experiment and iterate. These learnings can be provided to the model as examples, such as a valid schema description for BigQuery. Providing a working example for the model to follow helps it create DDL that conforms to your desired rules.

Now you have an image of an ER diagram to present to your LLM.

## Prompt to guide the model
llm_erd_prompt = f"""Use BigQuery SQL commands to create the following:
- Create a new BigQuery schema named "{dataset_id}".
- Use only BigQuery data types. Double and triple check this since it causes a lot of errors.
- Create the BigQuery DDLs for the attached ERD.
- Create primary keys for each table using the ALTER command. Use the "NOT ENFORCED" keyword.
- Create foreign keys for each table using the ALTER command. Use the "NOT ENFORCED" keyword.
- For each field add an OPTIONS for the description.
- Cluster the table by the primary key.
- For columns that can be null do not add "NULL" to the created SQL statement. BigQuery leaves this blank.
- All ALTER TABLE statements should be at the bottom of the generated script.
- The ALTER TABLE statements should be ordered by the primary key statements and then the foreign key statements. Order matters!
- Double check your work, especially that you used ONLY BigQuery data types.

Previous errors that have been generated by this script. Be sure to check your work to avoid encountering these:
- Query error: Type not found: FLOAT at [6:12]
- Query error: Table test.company does not have Primary Key constraints at [25:1]

## Example for the model to follow
Example:
CREATE TABLE IF NOT EXISTS `{project_id}.{dataset_id}.customer`
(
    customer_id INTEGER NOT NULL OPTIONS(description="Primary key. Customer table."),
    country_id INTEGER NOT NULL OPTIONS(description="Foreign key: Country table."),
    customer_llm_summary STRING NOT NULL OPTIONS(description="LLM generated summary of customer data."),
    customer_lifetime_value STRING NOT NULL OPTIONS(description="Total sales for this customer."),
    customer_cluster_id FLOAT64 NOT NULL OPTIONS(description="Clustering algorithm id."),
    customer_review_llm_summary STRING OPTIONS(description="LLM summary of all of the customer reviews."),
    customer_survey_llm_summary STRING OPTIONS(description="LLM summary of all of the customer surveys.")
)
CLUSTER BY customer_id;

CREATE TABLE IF NOT EXISTS `{project_id}.{dataset_id}.country`
(
    country_id INTEGER NOT NULL OPTIONS(description="Primary key. Country table."),
    country_name STRING NOT NULL OPTIONS(description="The name of the country.")
)
CLUSTER BY country_id;

ALTER TABLE `{project_id}.{dataset_id}.customer` ADD PRIMARY KEY (customer_id) NOT ENFORCED;
ALTER TABLE `{project_id}.{dataset_id}.country` ADD PRIMARY KEY (country_id) NOT ENFORCED;

ALTER TABLE `{project_id}.{dataset_id}.customer` ADD FOREIGN KEY (country_id) REFERENCES `{project_id}.{dataset_id}.country`(country_id) NOT ENFORCED;
"""


STEP 3: Call the Gemini 1.0 Pro Vision model

After creating a prompt in Step 2, you are ready to call the Gemini 1.0 Pro Vision model to generate the output, passing the image of your ER diagram as input alongside the prompt. You can do this in a number of ways: either directly from a Colab notebook using Python, or through BigQuery ML, leveraging its integration with Vertex AI. The notebook route looks like this:

imageBase64 = convert_png_to_base64(menu_erd_filename)
llm_response = GeminiProVisionLLM(llm_erd_prompt, imageBase64, temperature=0.2, topP=1, topK=32)
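
Here, convert_png_to_base64 and GeminiProVisionLLM are helper functions from the demo notebook. As a rough, minimal sketch of what they might look like, assuming the Vertex AI Python SDK (the region, model name, and exact signatures are assumptions, not the notebook's actual code):

import base64

import vertexai
from vertexai.generative_models import GenerationConfig, GenerativeModel, Part

def convert_png_to_base64(filename):
    # Read the ER diagram screenshot and encode it as base64 text.
    with open(filename, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

def GeminiProVisionLLM(prompt, image_base64, temperature=0.2, topP=1, topK=32):
    # Send the prompt plus the ER diagram image to Gemini 1.0 Pro Vision
    # and return the generated DDL script as plain text.
    vertexai.init(project=project_id, location="us-central1")  # assumed region
    model = GenerativeModel("gemini-1.0-pro-vision")
    image = Part.from_data(data=base64.b64decode(image_base64), mime_type="image/png")
    config = GenerationConfig(temperature=temperature, top_p=topP, top_k=topK)
    response = model.generate_content([prompt, image], generation_config=config)
    return response.text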

Conclusions and resources

In this demo, we showcased how the multimodal Gemini model streamlines data and schema creation. Writing prompts manually is feasible, but it becomes daunting at enterprise scale, where thousands of assets such as DDL scripts may be needed. By leveraging this approach, you can parameterize and automate prompt generation, significantly accelerating workflows and ensuring consistency across generated artifacts. Access the full Colab Enterprise notebook source code here.
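
To make the parameterization point concrete, a small driver loop along the following lines could fan the same prompt out over a whole folder of ER diagrams and run each generated script. The folder name and the immediate execution through the BigQuery client are illustrative assumptions; in practice you would review the generated SQL first:

import pathlib

from google.cloud import bigquery

client = bigquery.Client(project=project_id)

# Hypothetical batch driver: one generated DDL script per ER diagram image.
for erd_file in sorted(pathlib.Path("er_diagrams").glob("*.png")):
    image_b64 = convert_png_to_base64(str(erd_file))
    ddl_script = GeminiProVisionLLM(llm_erd_prompt, image_b64, temperature=0.2, topP=1, topK=32)
    client.query(ddl_script).result()  # runs the multi-statement DDL script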

BigQuery ML includes many new features that let you use Gemini Pro capabilities; for more, please see the documentation. Then, check out this tutorial to learn how to apply Google’s models to your data, deploy models, and operationalize ML workflows, all without ever moving your data from BigQuery. Finally, for a behind-the-scenes look at how we made this demo, watch this video on how to build an end-to-end data analytics and AI application using advanced models like Gemini directly from BigQuery.
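
For reference, the BigQuery ML route mentioned in Step 3 centers on creating a remote model over a Vertex AI connection and calling it with ML.GENERATE_TEXT. A rough sketch, where the connection name, dataset, and prompt are placeholders:

from google.cloud import bigquery

client = bigquery.Client(project=project_id)

# Placeholder names; the Vertex AI connection must already exist
# and have been granted access to Vertex AI.
client.query(f"""
CREATE OR REPLACE MODEL `{project_id}.{dataset_id}.gemini_pro`
  REMOTE WITH CONNECTION `{project_id}.us.vertex_ai_connection`
  OPTIONS (ENDPOINT = 'gemini-pro');
""").result()

rows = client.query(f"""
SELECT ml_generate_text_llm_result
FROM ML.GENERATE_TEXT(
  MODEL `{project_id}.{dataset_id}.gemini_pro`,
  (SELECT 'Generate BigQuery DDLs for a customer table and a country table.' AS prompt),
  STRUCT(0.2 AS temperature, TRUE AS flatten_json_output));
""").result()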

This article is translated and adapted from the Google Cloud official blog. We hope this Microfusion Technology article has given you insights into how to efficiently perform data modeling and schema generation using multimodal LLMs! We look forward to continually bringing you the most popular and cutting-edge topics.