This is the second article about the cpt-model; this time I will show you some of its possible applications. If you haven't read the first part yet, you can find it here.

All the applications described below have proven to be very helpful in multiple CRUX projects.

Lithological profiles *

Visualization is key. As explained in the previous article, we often have to deal with a lot of data, such as large numbers of cpts. Classifying them and storing that information is an important and necessary part of the process, but I believe that as humans we need to see things and translate numbers into images and colors.

For this reason we developed a Python library that generates interactive lithological profiles. These profiles help the engineer to get an idea of the stratigraphy of a certain area and its variability, and therefore to make better decisions.

The Python library is called gtlp_core and can be used in combination with the cpt-model to make beautiful plots.

Let's see how you could use this library.

We start by importing the libraries that are needed:

import gtlp_core as gtlp
from nuclei import call_endpoint
from pygef import ParseGEF
import os

Pygef is an open-source parser of gef files; you can find it here.

Let's say that your gef files are stored in a folder called cpts. We can use pygef to parse them and store the parsed objects in a list.

list_gefs = []
path_folder = "./cpts/"
# parse every gef file in the folder and collect the parsed objects
for gef_file in os.listdir(path_folder):
    list_gefs.append(ParseGEF(os.path.join(path_folder, gef_file)))

We can then use the methods of gtlp_core to extract the metadata from the cpts and order them spatially between a start point and an end point. We will obtain a pandas.DataFrame of ordered cpts.

# extract the metadata (location, test_id, ...) of the parsed cpts
df_gef_files = gtlp.extract_gef_meta_data(list_gefs)

# order the cpts spatially along the line between start and end point
start_point = (79978.87, 424882.02)
end_point = (79487.05, 424832.42)
df_ordered = gtlp.order_gefs_by_xy(start_point, df_gef_files, end_point, method="tsp")
df_ordered.head()

Now we can use the cpt-model to classify those cpts. To do so, we can write a function that returns a dictionary with the test_id of each cpt as key and its layer_table as value.
We first import the libraries that are needed and define the colors associated with the soil types.

from pygef import grouping
import pandas as pd
from nuclei import call_endpoint

color_NEN = {"grind, zwak siltig": "#AEA9A9",
             "grind, sterk siltig": "#ABAB7D",
             "zand, zwak siltig, kleiig": "#DEDF70",
             "zand, sterk siltig, kleiig": "#C8DB65",
             "zand, schoon": "#FFF50A",
             "klei, schoon": "#429458",
             "klei, sterk zandig": "#74987E",
             "klei, zwak zandig": "#74987E",
             "klei, organisch": "#3A5844",
             "veen, niet voorbelast": "#74500C",
             "leem, sterk zandig": "#114B8D",
             "leem, zwak zandig": "#7DA9DB"
             }
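If you want to sanity-check the legend, a quick matplotlib preview could look like this (purely illustrative; matplotlib is not part of the workflow described here):

import matplotlib.pyplot as plt

# draw one coloured bar per soil type, labelled with the NEN name
fig, ax = plt.subplots(figsize=(5, 4))
for i, (soil, colour) in enumerate(color_NEN.items()):
    ax.barh(i, 1, color=colour)
    ax.text(1.1, i, soil, va="center")
ax.set_xlim(0, 4)
ax.set_axis_off()
plt.show()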

Then, we write the function:

def create_dictionary_layer_tables(list_gefs, penalty=3):
    """Classify each cpt with the cpt-model and return {test_id: layer_table}."""
    dict_layer_tables = {}
    for gef in list_gefs:
        schema = {"aggregate_layers_penalty": penalty,
                  "aggregation_loss": "l1",
                  "cpt_content": gef.s,
                  "include_features": True,
                  "include_location": True,
                  "n_clusters": 3,
                  "merge_nen_table": True,
                  "interpolate_nen_table_values": True
                  }
        result = call_endpoint("gef-model", "/classify", schema)
        df_layers = result["layer_table"]
        df_layers = df_layers.rename(columns={"nen_row": "layer", "depth_top": "z_in"})
        # bottom of each layer = top + thickness
        df_layers = df_layers.assign(zf=df_layers["z_in"] + df_layers["thickness"])
        df_layers = df_layers.assign(gef_name=[str(gef.test_id)] * df_layers.shape[0])
        # add the layer centres and convert the depths to NAP levels
        df_layers = (df_layers
                     .pipe(grouping.calculate_z_centr)
                     .pipe(grouping.calculate_z_in_NAP, gef.zid)
                     .pipe(grouping.calculate_zf_NAP, gef.zid)
                     )
        # map every soil type to its colour
        df_layers = df_layers.assign(colours=[color_NEN[soil] for soil in df_layers["layer"]])
        dict_layer_tables[str(gef.test_id)] = df_layers
    return dict_layer_tables

Lastly, we can use this function to generate the input needed for the plot. The result will be an html file that contains an interactive plot.

penalty = 3
dict_layer_tables = create_dictionary_layer_tables(list_gefs, penalty=penalty)
gtlp_df = gtlp.create_database_gtlp(dict_layer_tables, df_ordered)
gtlp.plot_gtlp(gtlp_df, df_ordered, plot_cpt=True)

The plots can be generated with or without cpt logs by setting the argument plot_cpt to True/False. In the version with cpt logs, the customizable scale and grid guarantee optimal readability of the data. On the other hand, the plot without cpt logs has the advantage that the real distances are maintained.
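For example, the distance-true variant without cpt logs is obtained by simply flipping that flag:

gtlp.plot_gtlp(gtlp_df, df_ordered, plot_cpt=False)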


Summing up, an interactive lithological profile can be created by using a few lines of code and the following libraries:
pygef: to parse GEF files
cpt-model: to transform cpt-data into lithology
gtlp_core: to create a lithological profile

Dike and road embankment calculations

The cpt-model has also proven to be very helpful in dike and road stability calculations. These projects usually involve a lot of soil investigation. The image below shows the spatial distribution of the site investigation for a dike project: the red dots represent the cpts and the purple dots represent the boreholes.

Usually cpts are executed in multiple parts of the embankment: at the inner toe, at the crest and sometimes also at the outer toe and foreland, as shown in the figure below.

We can use the cpt-model to automatically classify all the available cpts and combine the obtained stratigraphy with the surface lines. The combination can be done in Python by using the geometry library Shapely. We will obtain something like the figure below.
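As an illustration, a minimal sketch of this clipping step with Shapely could look as follows (the cross-section coordinates and layer levels are invented, and the actual workflow is more elaborate):

from shapely.geometry import LineString, Polygon

# surface line of the embankment cross-section: (distance, level) pairs
surface = LineString([(0.0, 0.0), (10.0, 3.0), (20.0, 3.0), (30.0, 0.0)])

# a classified layer represented as a horizontal band between two levels
layer = Polygon([(0.0, 1.0), (30.0, 1.0), (30.0, 2.0), (0.0, 2.0)])

# everything below the surface line, closed off at the bottom of the section
below_surface = Polygon(list(surface.coords) + [(30.0, -10.0), (0.0, -10.0)])

# keep only the part of the layer that lies inside the embankment geometry
clipped_layer = layer.intersection(below_surface)
print(clipped_layer.wkt)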

We can then add water levels, state conditions, traffic loads, search grid and tangent lines and use the geolib to automatically generate the file and calculate the safety factor.
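A simplified sketch of that last step, assuming the Deltares GEOLib interface for D-Stability (DStabilityModel with add_layer, serialize and execute) and an invented two-layer geometry:

from pathlib import Path

from geolib.geometry.one import Point
from geolib.models import DStabilityModel

dm = DStabilityModel()

# invented cross-section; "Clay" and "Sand" are assumed to be codes
# that exist in the model's soil collection
dm.add_layer([Point(x=-50, z=-5), Point(x=50, z=-5),
              Point(x=50, z=0), Point(x=-50, z=0)], soil_code="Clay")
dm.add_layer([Point(x=-50, z=-15), Point(x=50, z=-15),
              Point(x=50, z=-5), Point(x=-50, z=-5)], soil_code="Sand")

# write the D-Stability input file and run the calculation
# (requires the D-Stability console to be configured for geolib)
dm.serialize(Path("cross_section.stix"))
dm.execute()  # the safety factor can then be read from the parsed results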

As a proof of concept we recently generated cross sections at intervals of 20 m along a dike and we calculated them all. By plotting all the cross-sections we could quickly check whether the schematisations were correct (the power of visualization!) and refine the code where necessary.
The code that is used to automatically generate the 2-d sections and calculate the stability factor is a bit too long to fit in a blog post :). So if you want to know more about it, get in contact with us by sending an email to info@cemsbv.nl.
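To give a flavour of one small building block: the locations of the equally spaced cross-sections along a dike axis can be generated with a few lines of Shapely (the axis coordinates below simply reuse the profile example above and are not from a real project):

from shapely.geometry import LineString

# hypothetical dike axis; a real project would use the surveyed axis
dike_axis = LineString([(79978.87, 424882.02), (79487.05, 424832.42)])

spacing = 20.0  # one cross-section every 20 m
n_sections = int(dike_axis.length // spacing) + 1
for i in range(n_sections):
    point = dike_axis.interpolate(i * spacing)
    print(f"section {i}: ({point.x:.2f}, {point.y:.2f})")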
I hope that this article gave you some insights and inspiration, and that you now have a better idea of the potential of the cpt-model. These were only two of the possible applications, but the sky is the limit ;)

We are currently working on more applications that we offer via an API service, so make sure to follow us on LinkedIn to stay up-to-date!

*Disclaimer:
This blog relates to the previous CPT Core version, called GEF-model. We've since updated the API, renamed the service, and introduced additional features. Explore the new API advancements for enhanced functionality.
