In this video I will show how to generate and use graph embeddings with a feature store.
Before you continue reading, please watch this short introduction:
Graphs are structures that contain sets of entity nodes and edges representing the interactions between them.
Such data structures can be used to model real-life interactions in many areas, such as social networks, web data, or even molecular biology.
To use the properties contained in graphs in machine learning algorithms, we need to map them to more accessible representations called embeddings.
In contrast to graphs, embeddings are structures representing node features, and they can easily be used as input to machine learning algorithms.
Because graphs are frequently represented by large datasets, calculating embeddings can be challenging.
To solve this problem, I will use Cleora, a very efficient open-source project written entirely in Rust.
Let's follow the Cleora algorithm. In the first step, we determine the number of features, which defines the embedding dimensionality.
Then we initialize the embedding matrix. In the next step, based on the input data, we calculate the random walk transition matrix.
The matrix describes the relations between nodes and is defined as the ratio of the number of edges running from the first node to the second node
to the degree of the first node.
The training phase is an iterative multiplication of the transition matrix and the embedding matrix, followed by L2 normalization of the embedding rows.
Finally, after the defined number of iterations, we obtain the embedding matrix.
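To make the iteration more concrete, here is a minimal NumPy sketch of the steps described above. It uses a dense matrix, a toy undirected graph, and random initialization, which are my simplifications; the real Cleora implementation works on sparse structures and is written in Rust.

import numpy as np

def cleora_embeddings(edges, num_nodes, dim=8, iterations=4, seed=0):
    rng = np.random.default_rng(seed)

    # Random-walk transition matrix: M[a, b] = (# edges a -> b) / degree(a)
    adjacency = np.zeros((num_nodes, num_nodes))
    for src, dst in edges:
        adjacency[src, dst] += 1
        adjacency[dst, src] += 1  # treat the toy graph as undirected
    degrees = adjacency.sum(axis=1, keepdims=True)
    transition = adjacency / np.clip(degrees, 1, None)

    # Randomly initialized embedding matrix (num_nodes x dim)
    embeddings = rng.uniform(-1.0, 1.0, size=(num_nodes, dim))

    # Training: multiply by the transition matrix, then L2-normalize the rows
    for _ in range(iterations):
        embeddings = transition @ embeddings
        norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
        embeddings = embeddings / np.clip(norms, 1e-12, None)

    return embeddings

emb = cleora_embeddings(edges=[(0, 1), (1, 2), (2, 0), (2, 3)], num_nodes=4)
print(emb.shape)  # (4, 8)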
Moreover, to make it easier to build a complete solution, I have extended the project with the ability to read from and write to an S3 store,
and to use the Apache Parquet format, which significantly reduces the embedding size.
Additionally, I have wrapped the Rust code with Python bindings, so we can simply install it and use it as a Python package.
Based on the Cleora example, I will use the Facebook dataset from SNAP
to calculate embeddings for the page-to-page graph and train a machine learning model which classifies the page category.
In the first step, we need to prepare the input file in the appropriate clique or star expansion format.
# based on: https://github.com/Synerise/cleora/blob/master/example_classification.ipynb
import pandas as pd
import s3fs
import numpy as np
import random
from sklearn.model_selection import train_test_split

random.seed(0)
np.random.seed(0)

df_cleora = pd.read_csv("./facebook_large/musae_facebook_edges.csv")
train_cleora, test_cleora = train_test_split(df_cleora, test_size=0.2)

fb_cleora_input_clique_filename = "s3://input/fb_cleora_input_clique.txt"
fb_cleora_input_star_filename = "s3://input/fb_cleora_input_star.txt"

fs = s3fs.S3FileSystem(client_kwargs={'endpoint_url': "http://minio:9000"})

with fs.open(fb_cleora_input_clique_filename, "w") as f_cleora_clique, \
     fs.open(fb_cleora_input_star_filename, "w") as f_cleora_star:
    grouped_train = train_cleora.groupby('id_1')
    for n, (name, group) in enumerate(grouped_train):
        group_list = group['id_2'].tolist()
        group_elems = list(map(str, group_list))
        f_cleora_clique.write("{} {}\n".format(name, ' '.join(group_elems)))
        f_cleora_star.write("{}\t{}\n".format(n, name))
        for elem in group_elems:
            f_cleora_star.write("{}\t{}\n".format(n, elem))
Then, we use the Cleora Python bindings to calculate the embeddings and write them as a Parquet file to the S3 MinIO store.
For each node, I have added an additional datetime column, which represents the timestamp
and will help to check how the calculated embeddings change over time.
Additionally, every embedding recalculation is saved as a separate Parquet file,
e.g. emb__CliqueNode__CliqueNode_20220910T204145.parquet.
Thus we will be able to follow the embedding history.
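The exact code depends on the bindings, but the idea of adding the datetime column and writing a timestamped Parquet file to MinIO can be sketched as follows (the embeddings dataframe, column names, and bucket are placeholders, not the real output of the bindings):

from datetime import datetime
import pandas as pd

# placeholder for the embeddings produced by the Cleora bindings:
# one row per node, feature columns f_0 ... f_n
embeddings = pd.DataFrame({"entity": [0, 1], "f_0": [0.1, 0.2], "f_1": [0.3, 0.4]})

# extra column used to follow how the embeddings change over time
now = datetime.utcnow()
embeddings["datetime"] = now

# every recalculation goes to a separate, timestamped Parquet file on MinIO
output_path = f"s3://output/emb__CliqueNode__CliqueNode_{now.strftime('%Y%m%dT%H%M%S')}.parquet"
embeddings.to_parquet(
    output_path,
    storage_options={"client_kwargs": {"endpoint_url": "http://minio:9000"}},
)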
Now, we are ready to consume the calculated embeddings with the Feast feature store and the Yummy extension.
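As a rough sketch (the feature view name, feature names, and entity column below are illustrative, not the real definitions), reading the embeddings as point-in-time historical features could look like this:

from datetime import datetime
import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# entity dataframe: the nodes and the point in time we want features for
entity_df = pd.DataFrame(
    {
        "entity": [0, 1, 2],
        "event_timestamp": [datetime(2022, 9, 10, 21, 0, 0)] * 3,
    }
)

# assumed feature view name "fb_embeddings" registered over the Parquet files
training_df = store.get_historical_features(
    entity_df=entity_df,
    features=[f"fb_embeddings:f_{i}" for i in range(8)],
).to_df()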
In this video I will show how to use Apache Iceberg as the historical store for a feature store.
Moreover, we will build an end-to-end real-time ingestion example with:
Postgres
Kafka Connect
Iceberg on MinIO
Feast with the Yummy extension
Before you continue reading, please watch this short introduction:
Apache Iceberg is a high-performance table format which can be used for huge analytic datasets.
Iceberg offers several features, such as schema evolution, partition evolution, hidden partitioning,
and many more, which can be used to effectively process petabytes of data.
Read more if you want to learn about the Iceberg features and how it compares to other lake formats (Delta Lake and Hudi).
Apache Iceberg is a perfect candidate for a historical store, thus I have decided to integrate it
with the Feast feature store through the Yummy extension.
To show how to use it, I will describe an end-to-end solution with real-time Iceberg ingestion from other data sources.
To do this, I will use Kafka Connect with the Apache Iceberg sink.
This can be used to build an Iceberg lake on an on-premise S3 store,
or to move your data and build a feature store in the cloud.
You can follow the solution in the notebook example.ipynb and simply reproduce it using Docker.
Suppose we have a transactional system based on a Postgres database, where we keep the current client features.
We will track feature changes to build a historical feature store.
Kafka Connect will use the Debezium Postgres connector to track every data change and put it into Iceberg using the Iceberg sink.
We will store the Iceberg tables on the MinIO S3 store, but of course you can use AWS S3.
Kafka Connect is based on Kafka, thus we will also need a Kafka instance and ZooKeeper.
We will set up the selected components using Docker.
To start, clone the repository:
git clone https://github.com/yummyml/yummy-iceberg-kafka-connect.git
cd yummy-iceberg-kafka-connect
Then run
./run.postgres.sh
docker run -it --name postgres --rm --network=app_default \
  -e POSTGRES_PASSWORD=postgres \
  -p 5432:5432 postgres:12.11 -c wal_level=logical
./run.zookeeper.sh
docker run -it --rm --name zookeeper --network app_default \
  -e ZOOKEEPER_CLIENT_PORT=2181 -e ZOOKEEPER_TICK_TIME=2000 \
confluentinc/cp-zookeeper:7.2.0
All the commands below are already in the example.ipynb notebook, but I will explain all of them.
Kafka Connect will publish database changes to Kafka, thus we also need to create the appropriate topics
if we don't have topic auto-creation enabled.
I have created two topics because we will track two Postgres tables.
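For illustration, the topics could be created from Python with the confluent-kafka package; the bootstrap server and the topic names below are placeholders that follow the usual Debezium naming convention <server>.<schema>.<table>:

from confluent_kafka.admin import AdminClient, NewTopic

# placeholder topic names for the two tracked tables
topics = ["postgres.public.mytable1", "postgres.public.mytable2"]

admin = AdminClient({"bootstrap.servers": "kafka:9092"})
futures = admin.create_topics(
    [NewTopic(t, num_partitions=1, replication_factor=1) for t in topics]
)
for topic, future in futures.items():
    future.result()  # raises an exception if the topic could not be created
    print(f"created {topic}")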
Now, we can set up the Postgres connector and the Iceberg sink through the Kafka Connect API.
In the Postgres connector, we need to specify the list of tables which we want to track.
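The connector is registered with a POST request to the Kafka Connect REST API. The sketch below uses the standard Debezium Postgres connector options, with the hostnames, credentials, and table list as placeholders:

import json
import requests

connector = {
    "name": "postgres-connector",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "postgres",
        "database.password": "postgres",
        "database.dbname": "postgres",
        "database.server.name": "postgres",
        "plugin.name": "pgoutput",
        # placeholder list of tables we want to track
        "table.include.list": "public.mytable1,public.mytable2",
    },
}

response = requests.post(
    "http://connect:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector),
)
response.raise_for_status()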
Currently, you can use Iceberg only with the Spark backend.
You can also add additional Spark configuration, such as the catalog configuration or the S3 store configuration.
In the next step, you have to add the Iceberg data source.
In the feature store definition, you specify the path to the Iceberg table (on the filesystem) or the table name (on the S3 store) which you want to consume.
Of course, you can combine the Iceberg data source with other data sources like Parquet files, CSV files, or even Delta Lake if needed.
Here you can see how to do this.
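The snippet below is only a sketch of such a definition: the IcebergDataSource class and its parameters reflect my understanding of the Yummy package and should be treated as assumptions, while Entity, Feature, and FeatureView are standard Feast objects (using the Feast API of that time):

from datetime import timedelta
from feast import Entity, Feature, FeatureView, ValueType
from yummy import IcebergDataSource  # assumed import path

entity = Entity(name="entity_id", value_type=ValueType.INT64)

iceberg_source = IcebergDataSource(
    # assumed parameter: a filesystem path, or a table name when using the S3 store
    path="mydb.mytable",
    event_timestamp_column="datetime",
)

my_features_view = FeatureView(
    name="my_features",
    entities=["entity_id"],
    ttl=timedelta(days=1),
    features=[Feature(name="f0", dtype=ValueType.FLOAT)],
    batch_source=iceberg_source,
)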
Now, we are ready to apply the feature store definition and fetch the historical features.
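Applying the definitions and fetching features can be done with the feast CLI or programmatically; here is a short sketch of the programmatic path, reusing the illustrative names from the sketch above:

from datetime import datetime
import pandas as pd
from feast import FeatureStore

store = FeatureStore(repo_path=".")
store.apply([entity, my_features_view])  # entity and my_features_view from the sketch above

entity_df = pd.DataFrame(
    {"entity_id": [1, 2, 3], "event_timestamp": [datetime.utcnow()] * 3}
)
features = store.get_historical_features(
    entity_df=entity_df, features=["my_features:f0"]
).to_df()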
In this article I'd like to show how to predict a video matte using a machine learning model.
Before you continue reading, please watch this short introduction:
In the previous article, I showed how to cut the background from an image:
AI Scissors – sharp cut with neural networks.
This time we will generate a matte for video without a green screen, using a machine learning model.
Video matting is a technique which helps to separate a video into two or more layers, for example foreground and background.
Using this method, we generate an alpha matte, which determines the boundaries between the layers
and allows us, for example, to substitute the background.
Nowadays these methods are widely used in video conference software, and you probably know them very well.
But is it possible to process 4K video and generate a high-resolution alpha matte without green screen props?
Following the article arxiv 2108.11515, we can achieve this using
the "Robust High-Resolution Video Matting with Temporal Guidance" method.
The authors used a recurrent architecture to exploit temporal information, thus the model predictions
are more coherent, which improves matting robustness.
Moreover, they proposed a new training strategy in which they use both matting datasets (VideoMatte240K, Distinctions-646, Adobe Image Matting)
and segmentation datasets (YouTubeVIS, COCO).
This mixture helps to achieve better quality for complex datasets and prevents overfitting.
The neural network architecture consists of three elements.
The first element is the feature-extraction encoder, which extracts features of individual frames and accurately locates human subjects; the encoder is based on the MobileNetV3-Large backbone.
The second element is the recurrent decoder, which aggregates temporal information; the recurrent approach helps the model learn by itself what information to keep and forget on a continuous stream of video.
The final element is the Deep Guided Filter module for high-resolution upsampling.
Because the authors shared their work and models, I have prepared an easy-to-use Docker-based application which you can use to simply process your video.
To run it you will need Docker, and you can run it with or without a GPU card.
With GPU:
docker run -it --gpus all -p 8000:8000 --rm --name aimatting qooba/aimatting:robust
Without GPU:
docker run -it -p 8000:8000 --rm --name aimatting qooba/aimatting:robust
Then open address http://localhost:8000/ in your browser.
Because the model does not require any auxiliary inputs, such as a trimap or a pre-captured background image, we simply upload our video and choose the required background. Currently we can generate a green screen background, which can then be replaced in video editing software.
We can also use a predefined color, an image, or even a video.
I have also prepared the app for the older algorithm version:
arxiv 2012.07810
To use it, please run:
docker run -it --gpus all -p 8000:8000 --rm --name aimatting qooba/aimatting:background
This version additionally requires the background image but sometimes achieves better results.
In this article I'd like to present a really delicious Feast extension: Yummy.
Before you continue reading, please watch this short introduction:
Last time I showed the Feast integration with the Dask framework, which helps to distribute ML solutions across the cluster
but doesn't solve other problems.
Currently, Feast has a warehouse-based approach, where Feast builds and executes a query appropriate for the specific database engine.
Because of this architecture, Feast can't use multiple data sources at the same time.
Moreover, the logic which fetches historical features from offline data sources
is duplicated for every data source implementation, which makes it difficult to maintain.
To solve these problems, I have decided to create
Yummy,
a Feast extension, which is also published
as a PyPI package.
In Yummy I have used a backend-based approach, which centralizes the logic that fetches historical data from offline stores.
Currently, the Spark, Dask, Ray, and Polars backends are supported.
Moreover, because the selected backend is responsible for joining the data, we can use
multiple different data sources at the same time.
Additionally, with Yummy we can start using a feature store on a single machine and then
distribute it using the selected cluster type.
We can also use ready-to-use platforms like Databricks, Coiled, or Anyscale to scale our solution.
To use Yummy we have to install it:
pip install yummy
Then we have to prepare the Feast configuration feature_store.yaml:
In this case, we will use S3 as the feature store registry and Redis as the online store.
Yummy takes responsibility for the offline store, and in this case we have selected the Dask backend.
For the Dask, Ray, and Polars backends we don't have to set up a cluster;
if we don't provide a cluster configuration, they will run locally.
For Apache Spark, additional configuration is required on local machines.
In the next step, we need to provide the feature store definition in a Python file, e.g. features.py.
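Here is a sketch of such a definition which combines two data sources. I am assuming that Yummy exposes data source classes such as ParquetDataSource and CsvDataSource (treat the names and parameters as assumptions), while Entity, Feature, and FeatureView come from the Feast API of that time:

# features.py - illustrative sketch
from datetime import timedelta
from feast import Entity, Feature, FeatureView, ValueType
from yummy import ParquetDataSource, CsvDataSource  # assumed imports

entity = Entity(name="entity_id", value_type=ValueType.INT64)

parquet_source = ParquetDataSource(
    path="/data/features1.parquet",  # assumed parameter name
    event_timestamp_column="datetime",
)

csv_source = CsvDataSource(
    path="/data/features2.csv",  # assumed parameter name
    event_timestamp_column="datetime",
)

features1_view = FeatureView(
    name="features1",
    entities=["entity_id"],
    ttl=timedelta(days=1),
    features=[Feature(name="f0", dtype=ValueType.FLOAT)],
    batch_source=parquet_source,
)

features2_view = FeatureView(
    name="features2",
    entities=["entity_id"],
    ttl=timedelta(days=1),
    features=[Feature(name="f1", dtype=ValueType.FLOAT)],
    batch_source=csv_source,
)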
In this article I will show how to combine Feast and the Dask library to create a distributed feature store.
Before you continue reading, please watch this short introduction:
The feature store is a very important component of the MLOps process which helps to manage historical and online features. With Feast we can, for example, read historical features from Parquet files and then materialize them to Redis as an online store.
But what do we do if the historical data size exceeds our machine's capabilities? The Dask library can help to solve this problem. Using Dask, we can distribute the data and calculations across multiple machines. Dask can be run on a single machine or on a cluster (k8s, YARN, cloud, HPC, SSH, manual setup). We can start with a single machine and then smoothly move to a cluster if needed. Moreover, thanks to Dask we can read a bunch of Parquet files using a path pattern and run distributed training using libraries like scikit-learn or XGBoost.
I have prepared a ready-to-use Docker image, thus you can simply reproduce all the steps.
docker run --name feast -d --rm -p 8888:8888 -p 8787:8787 qooba/feast:dask
Then check the Jupyter notebook token, which you will need to log in:
docker logs feast
With Docker, you will have the whole environment ready.
In the notebook you can find all the steps:
Random data generation
I have used NumPy and scikit-learn to generate 1M entities and historical data (10 features generated with the make_hastie_10_2 function) for 14 days, which I saved as a Parquet file (1.34 GB).
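A minimal sketch of this kind of generation (the column names and output path are my assumptions; the real notebook may differ):

import numpy as np
import pandas as pd
from sklearn.datasets import make_hastie_10_2

n_entities = 1_000_000
days = pd.date_range("2021-12-01", periods=14, freq="D")

frames = []
for day in days:
    # 10 features per entity, regenerated for every day
    X, _ = make_hastie_10_2(n_samples=n_entities, random_state=int(day.day))
    df = pd.DataFrame(X, columns=[f"f{i}" for i in range(10)])
    df["entity_id"] = np.arange(n_entities)
    df["datetime"] = day
    frames.append(df)

# assumed output location for the generated historical features
pd.concat(frames, ignore_index=True).to_parquet("/data/features.parquet")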
Feast configuration and registry
feature_store.yaml - where I use a local registry and a SQLite database as the online store.
features.py - with one file source (the generated Parquet file) and the feature definitions.
To create the Feast registry, we have to run:
feast apply
Additionally, I have created a simple library which helps to inspect the Feast schema directly in the Jupyter notebook.
Using a Dask dataframe, we can continue with distributed training on the distributed data.
On the other hand, if we use a Pandas dataframe, the data will be collected on a single node.
To start distributed training with scikit-learn, we can use the Joblib library with the Dask backend:
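Here is a minimal sketch of this pattern, assuming a local Dask cluster and stand-in training data; in practice the data comes from the feature store:

import joblib
from dask.distributed import Client
from sklearn.datasets import make_hastie_10_2
from sklearn.ensemble import RandomForestClassifier

client = Client()  # local Dask cluster; point it at a scheduler address for a real cluster

# stand-in training data; in practice this comes from get_historical_features
X, y = make_hastie_10_2(n_samples=10_000, random_state=0)

model = RandomForestClassifier(n_estimators=200, n_jobs=-1)
with joblib.parallel_backend("dask"):
    # the joblib tasks spawned by scikit-learn are scheduled on the Dask workers
    model.fit(X, y)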