Introduction To Web Applications: Part 1

Negoiţă D. D. Felix
24 min read · Sep 28, 2021

Machine Learning, Architecture, Python, and Databases

Diving In and Introduction

It is hardly surprising that web applications have seen such impressive development over roughly the last ten years. If one were to synthesize the overall experience of desktop applications, there are a couple of valid arguments we can be almost certain would appear. First and foremost, a piece of desktop software has to be manually retrieved (downloaded from the Internet or obtained physically) and installed, which can present issues to the “non-technical” user. Needless to say, this process brings, and mostly has brought, subsequent issues regarding updating and/or patching the software, system requirements, and so on. Cross-platform development efforts are also needed in order to provide versions for the three major operating systems (macOS, Windows, Linux) if the target is to reach as large an audience as possible. Desktop applications used to be bound to the machine in terms of licensing as well, which further reduced flexibility in approaching one’s work. A further valid point has to do with the limited, often delayed, user feedback and how that can narrow the range of testing scenarios. Of course, no solution is constructed out of disadvantages alone, and we are not dealing with such a case here either: desktop applications tend to be faster and are generally considered more secure than their web counterparts.

However, history has shown that while the web is not the perfect solution, its advantages were simply too powerful to ignore. Not only does a web application require no installation from the user, but updates can be easily rolled out and made available to all users instantly after a new release. Web applications are also cross-platform and accessible from mobile (a rather important caveat since, according to a statistical analysis made by Oberlo, 52% of all web traffic in 2020 was from mobile devices. “What Percentage of Internet Traffic Is Mobile?” Oberlo. Web. Accessed August 19th 2020 (https://www.oberlo.com/statistics/mobile-internet-traffic).). Flexibility is increased as well, since licensing tends to be bound to an account that is accessible from any machine with an internet browser.

With the advent of Web 2.0, a plethora of frameworks and technologies became available that helped push web design into web development, with its more traditional programming paradigms. One of the important perspective shifts worth pondering is the appearance of SPAs (single page applications). Since performance is one of the most important aspects of user experience, the main feature of SPAs is that they remove the need for page reloads. In the previous paradigm of server-side rendered, multi-page applications, every time the user wanted to navigate within the application, a request was made to the server for another HTML page, resulting in a reload. With SPAs, after the initial load, the client requests data from the server and uses AJAX (asynchronous JavaScript and XML) to modify the existing page, resulting in no reload and a faster experience. While this has become a very common architectural pattern, SPAs have to load a considerable amount of JavaScript on the client side to achieve all this, which has drawn some scrutiny and necessitates extra care when it comes to security. Since it is clear, at the time of writing, that SPAs represent no mere passing trend but are here to stay, the present paper shall follow the development of an application that uses an SPA as its client.
Another crucial aspect brought about by Web 2.0 was the emphasis on user-generated content. Unsurprisingly, with the exception of presentation websites, very few web applications can be thought of as not being data-driven. Whole new fields of data engineering and data science (as it is now understood as being apart from simply applied statistics) have emerged, along with technologies and frameworks for distributed (cloud) computing, data warehousing, data visualization, and real-time data analysis. For quite a few years it seemed that “big data” was the most desirable technology-related term to be associated with. Indeed, one could argue that even at the time of writing this is still the case; however, the focus has shifted slightly, and somewhat naturally, towards machine learning, since it represents what can be done with massive amounts of data. By “data-driven web application” in the context of this paper one ought to understand any application where the user interacts with structured or semi-structured data. The application itself can be thought of as a wrapper for serving, and possibly also allowing the user to modify, data. Social media platforms, note taking applications, trackers, blogs, streaming platforms, etc., can all be considered data-driven: the core is the data and the application is an interface for it.

Having established that most contemporary applications tend to be data-driven and that this all still happens in the context of “big data”, one of the most valuable aspects of such applications is their ability to leverage machine learning on their data in order to provide insights, forecasts, anomaly detection, recommendations, and so on. The business value that any (or all) of these can bring is beyond the scope of this work, but needless to say, it can no longer be overlooked. It is with the purpose of illustrating this argument that the present paper follows the engineering of a web application that integrates a machine learning component. Besides showing how accessible AI is becoming, this work hopefully also provides satisfactory documentation of the development process of a modern SPA.

The Ubiquity of Machine Learning

Machine Learning (ML) is perhaps the most popular application of Artificial Intelligence in the world of “big data”. Although there are four main types of ML, which will be synthesized below, it is safe to loosely define it as mathematical models based on statistics that are used to analyze large volumes of data. And although at first glance that may seem no more special than Microsoft Excel, the caveat regarding machine learning algorithms is that they can be re-trained (essentially modified) whenever new data is made available to them, thus improving over time. In the traditional paradigm of computer programming, data is fed into an algorithm/program, which the user writes, and transformed into output. With a great many ML use cases (though not all, as we shall shortly see), including the one analyzed in this paper, data is fed into the program along with the expected output, and what gets computed is the algorithm (model) describing how the data can be turned into said output:

Though it may be difficult to deduce from the extremely simplified example illustrated in fig. 1, the benefit of ML is that it can establish patterns and connections between an incredibly high number of data points.
In order to better understand machine learning’s value and use cases, a short overview of its four sub-classifications is in order. Those are supervised, unsupervised, reinforcement, and deep learning, respectively.

Supervised Learning

Supervised learning is the simplest and most widely used form of machine learning, and it is the one that best fits the idea presented in fig. 1: it tries to map inputs to outputs based on a set of examples (called ‘training data’ in the literature). During the training phase, the chosen machine learning algorithm will look for correlations and patterns in the input data, creating a model that can be thought of as a function in mathematics. Almost all texts on supervised learning define it by insisting that the data is ‘labeled’, id est, the algorithm is being told exactly what feature(s) we are interested in mapping future inputs to. Predicting and mapping future inputs is the entire purpose of supervised learning, which is why it is an umbrella term encompassing classification and regression.

Classification, as the name implies, refers to assigning a data point to a certain class/category. One of the most common and useful examples of this is the spam filter, a binary classification problem: when a new email (data point) arrives, it can either be spam or not. Linear Classifiers, Random Forests, K-Nearest Neighbors, Decision Trees, and Support Vector Machines are among the most popular classification algorithms.

Regression is very similar in concept, in that, by analyzing patterns and correlations between features, it results in a function that, instead of classifying new data points, helps predict values for certain features. For example, in the case of a linear regression dealing with two features, such as the one in fig. 3, when a new data point comes in with a value for ‘Feature 2’, the model can make a prediction of its value for ‘Feature 1’.

Of course, this is the exact same graph we would use to plot the linear mathematical function f(x) = mx + b, and in the case of a two-feature regression that would actually be the case. However, the complexity increases rapidly with the number of features.
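To make the idea concrete, a minimal sketch of such a two-feature regression, using the popular scikit-learn library and made-up numbers purely for illustration, could look like this:

```python
from sklearn.linear_model import LinearRegression

# Made-up training data: 'Feature 2' values (inputs) and the labeled
# 'Feature 1' values (outputs) we want to learn to predict.
X = [[1.0], [2.0], [3.0], [4.0]]
y = [2.1, 3.9, 6.2, 8.1]

model = LinearRegression().fit(X, y)   # training phase: learn m and b
print(model.coef_, model.intercept_)   # roughly the m and b of f(x) = mx + b
print(model.predict([[5.0]]))          # predicted 'Feature 1' for a new data point
```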
Spam detection has already been mentioned as a possible application for classification, but supervised learning is also invaluable in the fields of speech recognition, object recognition for computer vision, and even bioinformatics.

Unsupervised Learning

In the case of unsupervised learning, the data is not categorized or labeled before being fed into the model, the point being that the algorithm should define those categories itself. In supervised learning, as has been seen above, the model is presented either with examples of categories (in the training data) or with the features that need predicting. In the unsupervised case, the categories or features, and the relationships between them, will be discovered by the model. It is therefore, as one would imagine, much more computationally intensive and less precise, with less accuracy in the results, and although data scientists do not have to prepare the data in advance to such a degree, they do have to spend more time interpreting it. In spite of all these drawbacks, the benefits cannot be ignored, and they become apparent when one takes a closer look at the types of problems tackled by this method: association and clustering.

What we mean by association is finding relationships among a massive number of variables in a data set, identifying, for example, which of them often occur together. Clustering, however, represents the great majority of the applications of unsupervised learning. While analyzing the different types of clustering algorithms is beyond the scope of the present paper, it is safe to say that this method’s goal is to find structure in a collection of uncategorized data:

Finding previously unknown patterns in the data (sometimes called “exploratory analysis”) and mining for associations have immensely valuable applications in retail, as well as in the process of ‘dimensionality reduction’, through which irrelevant features are removed from the data set, effectively reducing the ‘noise’ in the data. Clustering is extremely helpful for anomaly detection, since it can tell whether new data points conform to previous patterns, and is widely used in cybersecurity and financial software.
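As an illustration, a minimal clustering sketch with scikit-learn’s K-Means, using made-up, unlabeled data points, might look like the following; the check at the end hints at how distances to cluster centres can be used for anomaly detection:

```python
from sklearn.cluster import KMeans

# Unlabeled data points (two features each); the values are made up.
data = [[1.0, 1.1], [0.9, 1.0], [5.0, 5.2], [5.1, 4.9], [9.0, 0.2], [8.8, 0.1]]

# Ask the algorithm to discover three clusters on its own: no labels are provided.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(kmeans.labels_)                            # cluster assigned to each point

# A new point far from every cluster centre could be flagged as an anomaly.
print(kmeans.transform([[20.0, 20.0]]).min())    # distance to the nearest centre
```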

Reinforcement Learning

Reinforcement learning works, not surprisingly, much like the analogous term in psychology would suggest. The algorithm, called an ‘agent’, takes steps (or actions) in an environment in order to maximize a reward. Hence, it learns what sequence of steps it has to take to reach a certain goal without being guided, removing the need to program extremely difficult heuristics. A simplified scheme of the process is illustrated in Fig. 5 below:
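For readers who prefer code to diagrams, here is a purely illustrative sketch of the agent/environment/reward loop: a tiny tabular Q-learning agent in a made-up “corridor” environment. The environment, rewards, and hyperparameters are all assumptions chosen for brevity:

```python
import random

# Toy environment: states 0..4 in a corridor, the goal is state 4.
N_STATES = 5
ACTIONS = [-1, +1]                                   # step left or right
q_table = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2                # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # The agent picks an action: explore at random or exploit what it has learned.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(state, a)])

        # The environment responds with a new state and a reward.
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else -0.01

        # The agent updates its estimate of how good that action was.
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
        state = next_state

# After training, the learned policy is simply "always step right".
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in range(N_STATES - 1)})
```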

Deep Learning

Technically, deep learning is not a machine learning technique separate from supervised or unsupervised learning; however, it is specific enough to merit separate attention. Deep learning is loosely modeled on the human brain, in that it works with layers of nodes we can refer to as ‘artificial neurons’, in structures known as ‘artificial neural networks’. After data is passed in at the input layer, it is processed at each of the hidden layers (see Fig. 6) in turn, before the model arrives at the output layer. The consensus is that the term ‘deep’ used with respect to this technique describes a number of hidden layers larger than one.

Computation happens from one hidden layer to the next based on ‘weights’ assigned to each connection. Each neuron’s weighted inputs, together with a bias, are fed into an activation function, which determines the value that gets passed on to the neurons of the next layer. This process is called forward propagation. One of the most important features of these networks is that they are capable of backpropagation: after predictions in the output layer are tested against the real, expected outputs, the network goes backwards through the layers, readjusting the weights. It is not surprising, then, that when compared to many other machine learning algorithms, deep learning yields much better performance as the amount of data increases. It can also do feature extraction and classification on its own and be combined with other ML techniques. Although some of the downsides include the need for massive amounts of data and computational power, as well as the time it takes to train such models, deep learning has performed remarkably well in facial and object recognition, forecasting, autonomous driving, medical care, and even music composition.
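A minimal sketch of forward propagation through a tiny, fully connected network (three inputs, four hidden neurons, two outputs) can make the mechanics clearer; the weights here are random, whereas in practice backpropagation would keep adjusting them:

```python
import numpy as np

rng = np.random.default_rng(42)

x = rng.random(3)                             # input layer
W1, b1 = rng.random((4, 3)), rng.random(4)    # hidden layer weights and biases
W2, b2 = rng.random((2, 4)), rng.random(2)    # output layer weights and biases

def relu(z):
    # Activation function: decides each neuron's output.
    return np.maximum(0, z)

hidden = relu(W1 @ x + b1)                    # weighted sums plus biases, then activation
output = W2 @ hidden + b2                     # raw scores at the output layer
print(output)
```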

Depending on the complexity of the problem, machine learning can be a daunting field, requiring highly trained data scientists. However, for a great many use cases, it is no longer prohibitive. It could even be said that it is starting to become a necessity, as terms such as “data-driven strategy” continue to gain popularity. As discussed in the introduction, many (if not most) present-day web applications are data-driven. Complex data lakes and data warehouses, storing intricate logs and detailed histories about how, when, and for how long users interacted with the application and its data, are becoming so commonplace that they gave birth to the field of data engineering. The metaphorical scene has been perfectly set for developers to take advantage of machine learning. As shall be exemplified in later chapters, that is becoming increasingly easy to do with the advent of helpful libraries, solutions, and services. Far from needing advanced studies in statistical mathematics or an expert in data science, developers today can integrate ML into their application as simply as exposing a web API microservice, and that is precisely how it will be done for the purpose of this paper. As this chapter has shown, ML does not lack possible applications and can greatly enrich the value offered to our users or businesses.

Architectural Overview

Our proposed application is called “Noesis” and its purpose is to serve as a portal where users can browse books, get information about them, engage in conversation, make and save notes, and get personalized recommendations.

You can find the repo here: https://github.com/felixnego/noesis-books

The following chapters will document the implementation of the below features as well as the technologies used:

  • Scraping publicly available book data using APIs exposed by two similar platforms
  • Seeding our database with said data and generating some placeholders as well
  • Exposing a machine learning recommendation web API
  • Creating custom validation in the backend
  • Transferring data efficiently with data transfer objects in the backend
  • Authenticating users with JWT
  • Creating different user roles that enable different features
  • Create, Read, Update, Delete (CRUD) operations on our book entities
  • Search that looks for results in different fields
  • Reports on what the top rated books are in the most popular categories
  • Comments and ratings for books (CRUD)
  • User personalized notes on books (CRUD)
  • User profile page where all notes appear as well as book recommendations based on previously given ratings
  • Pagination with infinite scrolling
  • Client side validation for forms
  • Notifications with AlertifyJS

Thus, the application will have a .NET web API backend, an Angular application for its client side, a Python web API for the ML microservice and a MySQL database acting as the data persistence layer (Fig. 7 is a representation of all these components).
When initially deployed, a number of Python scripts will run to seed data into the MySQL database. In this first iteration, the data for the initial seed comes from three sources: previously downloaded CSV files, data scraped from two open APIs belonging to similar web apps, and placeholder data generated mostly for the sake of ratings, which the recommendation engine heavily relies on. The recommendation engine itself is exposed via a Flask application in Python: it trains a model based on the rating data available in the database and exposes two endpoints, the first returning a list of recommended books based on a user ID and the second re-training the model every time a new rating is added to the system. An ASP.NET Core web API serves as the main backend, and finally, a frontend application built with Angular represents the client side, making HTTP requests to both the .NET and the Flask APIs.
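As a rough sketch of that Flask microservice’s surface (the route names and the recommender stub below are illustrative assumptions, not the exact code from the repository), the two endpoints could look like this:

```python
from flask import Flask, jsonify

app = Flask(__name__)

class DummyRecommender:
    """Stand-in for the real model trained on the ratings stored in MySQL."""
    def recommend(self, user_id):
        return [1, 2, 3]            # would return recommended book IDs
    def fit_from_database(self):
        pass                        # would re-train the model on the latest ratings

recommender = DummyRecommender()

@app.route("/recommendations/<int:user_id>", methods=["GET"])
def recommendations(user_id):
    # First endpoint: a list of recommended books based on the user ID.
    return jsonify(recommender.recommend(user_id))

@app.route("/retrain", methods=["POST"])
def retrain():
    # Second endpoint: re-train the model whenever a new rating is added.
    recommender.fit_from_database()
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=5000)
```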

As previously mentioned, each of the components presented in the above diagram will be analyzed in the upcoming chapters, beginning with the scraper and seeder. However, let us first turn to an introduction to Python, since it is used in two main places across our application.

Python and Its Versatility

Python is a high-level, dynamically typed, single-threaded, interpreted, general-purpose programming language that has been soaring in popularity over the past years. Part of the reason for its growth is how easy it is for beginners in computer science to learn, thanks to its clear and simple syntax and the ability to obtain powerful results without diving into very many advanced language features.

When we describe a programming language as “high-level”, what we mean is that the developer need not concern themselves with the details of how the code actually runs on the machine (the operating system, CPU, etc.); the technical term is that these details of the computer/machine are “abstracted” away from the programmer. By contrast, in a low-level programming language, such as assembly or machine language, the code is very close to processor instructions. There are also middle-level programming languages, such as C, that fall somewhere in between.

A dynamically typed language is one in which we do not have to explicitly state in the code what data type each variable will have. Types are checked at runtime, rather than at compile time, as is the case with statically typed programming languages. Although this makes the language easier to pick up when first learning to program and helps keep the syntax clearer, it can lead to quirky behavior, such as lists holding elements of different types, which is sometimes considered unsafe due to being error-prone.

Python is single-threaded in its most common implementation, CPython: the interpreter will execute Python code using a single thread. That is not to say, however, that multiple threads cannot be created and managed from Python’s standard library: they absolutely can, but they can only achieve concurrency, not parallelism. With CPython, the Global Interpreter Lock will not allow two threads to run Python code at the same time (to achieve parallelism, multiprocessing should be considered); however, threads (or libraries that abstract this even further, such as ‘asyncio’) can be used to run code asynchronously, achieving concurrency.
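A small sketch of this distinction: the threads below run I/O-bound work (simulated with sleep, which releases the GIL) concurrently, even though only one of them executes Python bytecode at any given moment:

```python
import threading
import time

def download(name):
    time.sleep(1)                  # stands in for a network request (I/O-bound work)
    print(f"{name} finished")

threads = [threading.Thread(target=download, args=(f"task-{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()                       # all three finish in roughly one second, not three
```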

An interpreted language is best understood alongside a compiled counterpart. The terms “runtime” and “compile time” have been mentioned above, and they can be used to analyze the difference between such languages. A compiled language first goes through compile time, when the code is turned (compiled) into an executable (a group of machine language instructions); then, said executable is run (runtime). With an interpreted language, the code is run directly, almost line by line, without being bundled into an executable first. Compared with compiled languages, interpreted ones tend to be slower.

Finally, Python is general-purpose, which means that it can be used to write applications across a wide variety of categories. It is extensively used in data science, data engineering, robotic process automation (RPA), machine learning, web applications with Flask and Django, DevOps, and many others. It is the combination of this versatility and its manageable learning curve that constitutes Python’s greatest asset and ensures it keeps ranking very high in ‘top programming languages’ surveys.

The community behind Python is another aspect worthy of consideration. The online support, as well as a highly impressive number of packages and APIs, has a considerable positive impact on the entire development experience. Moreover, the community is actively involved in establishing guides and best practices, having come up with the concept of a “pythonic” way of programming. As can be inferred from the code snippet example above, the “pythonic” approach involves following community standards that encourage the use of language features to their full potential and original intention.

Even though not all of them are necessarily used in the course of this paper, the larger snippet on the right presents some of the language features that contribute to developing in a “pythonic” way.

Some more advanced features of Python

List comprehensions (lines 3–4) provide a quick way to build lists populated with elements from other iterables (in Python, that means objects implementing the __iter__ protocol) by specifying an expression between square brackets.
The next lines illustrate the use of a lambda expression, or anonymous function, as it is called in other languages. While presenting some limitations, Python does support such functions, as well as the option to store them in a variable and call them later, with a syntax that should be very familiar to JavaScript developers.
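For instance, a couple of lines in the same spirit as the snippet:

```python
# List comprehension: build a list of squares from another iterable.
squares = [n * n for n in range(10)]
evens = [n for n in squares if n % 2 == 0]

# Lambda expression stored in a variable and called later,
# much like an arrow function in JavaScript.
double = lambda x: x * 2
print(double(21), evens)
```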

On line nine, a context manager is opened. Context managers are special classes in Python (which means, naturally, that users can create their own) that work with the “with” keyword to create an indented block. When the block is entered, one of the context manager’s special methods is automatically called, and likewise another when execution exits the block. This helps ensure that the appropriate setup or cleanup logic is executed and that developers do not forget about it. Relevant examples include working with files, such as in our example, where one needs to make sure the file is closed afterwards, or handling database operations, where connections need to be opened and closed.
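A minimal, self-written context manager in the same vein (the built-in open() already behaves this way, so the class is purely illustrative):

```python
class ManagedFile:
    def __init__(self, path):
        self.path = path

    def __enter__(self):                            # called when the "with" block is entered
        self.file = open(self.path, "w")
        return self.file

    def __exit__(self, exc_type, exc_value, traceback):
        self.file.close()                           # cleanup runs even if an exception occurred

with ManagedFile("notes.txt") as f:
    f.write("cleanup is guaranteed")
```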

From line 13 to 24, an example of a function decorator can be seen. A decorator helps extend the functionality of an existing function without modifying it; it is a higher-order function acting as a wrapper. When the function “computation” is decorated with the special “@” symbol at line 22, it is equivalent to saying “computation = validate_positive(computation)”. That is to say, a decorator will return a function of the same name and signature that is called with its own name, not the decorator’s, but with a shorter, more elegant syntax. Common use cases include performing validations or re-running functions.
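Reusing the names from the snippet, a decorator of this kind could be sketched as follows (the body of “computation” is made up):

```python
def validate_positive(func):
    # Wrapper that adds validation without modifying the original function.
    def wrapper(*args):
        if any(a <= 0 for a in args):
            raise ValueError("arguments must be positive")
        return func(*args)
    return wrapper

@validate_positive          # equivalent to: computation = validate_positive(computation)
def computation(a, b):
    return a ** b

print(computation(2, 10))   # 1024
# computation(-1, 3) would raise ValueError
```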

Last on our list, but not least, is an example of a generator. At first sight, a generator is a function that uses “yield” instead of “return”. “Return” has to be the last statement executed in a function because it causes the execution to exit its block; that is not the case with “yield”. Since “yield” does not stop the execution, logic can be added after it as well. The function will compute results in sequence, one at a time, saving its state, and the sequence is lazily evaluated. That is the reason the example includes the “for” loop on line 32 and the two “print” statements on lines 35 and 36: the loop will force the evaluation of the sequence, while in the “print” statements the special “next()” function has been used to iterate over the generator object returned at line 34.

Aside from a somewhat more subtle approach to OOP (Object Oriented Programming) and possible difficulties with asynchronous or multi-threaded programming, Python is undoubtedly a powerful and versatile tool worth investigating. Let us continue our analysis with how Python was employed to scrape and seed data for our application.

I have a deeper dive into Python’s more advanced features here: https://negoiddfelix.medium.com/python-from-intermediate-to-superhero-1a86e518bb77

Scraping and Seeding

Under bin/seed-data, there are five main files of interest for this process. The first is books_reduces.csv, a file containing book-related data from Amazon, which was previously downloaded from an online source. Its structure looks like this:

Sample proposed data model

The second file is local_config.json, which houses the MySQL database credentials as well as an API key for Goodreads. It is important to add this file to .gitignore so that it does not get pushed to the repository, making the credentials visible to people who should not have access. Needless to say, this approach (storing credentials in a file) is not to be used when the application is deployed and should be confined to the local environment and development purposes only. When deployed, a viable alternative would be to keep everything we need from this file in separate environment variables.
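A minimal sketch of how this could be handled, reading local_config.json in development and falling back to environment variables when deployed; the key and variable names are assumptions, not necessarily the ones used in the repository:

```python
import json
import os

def load_config(path="local_config.json"):
    # Local development: read credentials from the git-ignored JSON file.
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    # Deployed environment: read the same values from environment variables.
    return {
        "mysql": {
            "host": os.environ["DB_HOST"],
            "user": os.environ["DB_USER"],
            "password": os.environ["DB_PASSWORD"],
            "database": os.environ["DB_NAME"],
        },
        "goodreads_api_key": os.environ["GOODREADS_API_KEY"],
    }
```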

A utils.py has been created as well, with the purpose of providing helper functions for import, thus keeping our script files to a manageable number of lines. In our case it contains some sanitizers, a function to read the config JSON mentioned above, two functions that query the database and return IDs, and a SQLSeeder class that acts as a handler for our database connection. This class was defined as a context manager: just as discussed in the previous chapter, it implements the two special methods __enter__() and __exit__().
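A minimal sketch of what such a SQLSeeder context manager could look like, assuming the mysql-connector-python package; the actual class in the repository may differ in its details:

```python
import mysql.connector

class SQLSeeder:
    def __init__(self, db_config):
        self.db_config = db_config                  # dictionary holding the MySQL credentials

    def __enter__(self):
        self.connection = mysql.connector.connect(**self.db_config)
        self.cursor = self.connection.cursor()
        return self.cursor                          # used by the caller to run queries

    def __exit__(self, exc_type, exc_value, traceback):
        self.connection.commit()                    # persist the inserts
        self.cursor.close()
        self.connection.close()

# Usage:
# with SQLSeeder(config["mysql"]) as cursor:
#     cursor.execute("INSERT INTO authors (name) VALUES (%s)", ("Jane Austen",))
```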

All we do in its constructor is pass in a configuration object (a dictionary, in Python’s case) that holds the necessary credentials. When we enter the context manager, on the line where the “with” keyword is used, we want the connection to the database to be opened and the cursor object returned, with the help of which we will run queries. In __exit__(), which is called when the “with” block finishes, we make sure to commit our changes and close the connection.

seed.py represents the most important part of the seeding process, which has been encapsulated into four main functions that have to be called in the specific order seen on the left.

The first function, insert_author_data(), uses the Pandas library to read the entire author column from the .csv file into a dataframe and insert it into the database. A dataframe is a Pandas object built on top of NumPy, a Python library implemented largely in C, which makes many operations faster. It also provides very powerful methods for easily manipulating data and is an indispensable tool for any data scientist. Therefore, handling data with Pandas in this way tends to be preferable to directly manipulating the .csv file with Python’s built-in “csv” module. However, the next function, seed_book_data(), does exactly the latter, both to illustrate the difference and to exercise more control over how each row is processed. In our case that means sanitizing individual cells, switching variables around in certain cases (that is, performing checks), or skipping badly-formed rows altogether in the “except” blocks.
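Side by side, the two approaches might look roughly like this; the column names (“author”, “title”) are assumptions about the CSV’s structure:

```python
import csv
import pandas as pd

# Pandas route: read one column of the CSV in a single call.
authors = pd.read_csv("books_reduces.csv")["author"].dropna().unique()

# csv-module route: iterate row by row for fine-grained control.
with open("books_reduces.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            title = row["title"].strip()
            # ... sanitize individual cells, swap misplaced values, then insert ...
        except (KeyError, AttributeError):
            continue                      # skip badly-formed rows
```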

The last two functions are slightly different in that they generate placeholder (“fake”) data with the help of another third-party library, Faker. A hundred sample users are created, who then give random ratings to each book inserted in the previous steps.
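In spirit, that placeholder generation looks something like the following sketch (the field names and the number of books are assumptions):

```python
import random
from faker import Faker

fake = Faker()

# One hundred placeholder users...
users = [{"username": fake.user_name(), "email": fake.email()} for _ in range(100)]

# ...each giving a random rating to every book inserted at the previous steps.
ratings = [
    {"user": user["username"], "book_id": book_id, "rating": random.randint(1, 5)}
    for user in users
    for book_id in range(1, 51)          # assuming 50 books were seeded earlier
]
```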

Last but not least, the directory also contains the script seed_external_data.py, which both scrapes and seeds into the database. It scrapes from two separate APIs. The first is OpenLibrary, where the script searches for books by ISBN and, if available, retrieves the ID under which they can be found on Goodreads, the number of pages, and the categories, and updates the database accordingly. The second is Goodreads: now that the database has a book’s Goodreads ID, we can scrape that platform to get a summary or description and add it to our table as well. Both of these are RESTful APIs, and the third-party “requests” library has been used to make HTTP calls and handle the responses. A Python wrapper for the Goodreads API exists and can be installed via “pip”; however, while the option was explored, it does not provide much in terms of functionality and flexibility. Running the two seeding scripts takes some time. However, the tradeoff has been accepted for a couple of reasons: they only have to be run once, after the database is created; our models will use real data, making the interaction with the application more valuable; and a larger dataset is needed in order to properly train any machine learning algorithm.
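For illustration, a request to the OpenLibrary books API could look like the sketch below; the exact endpoint and response fields used by seed_external_data.py may differ, so treat the details as assumptions:

```python
import requests

isbn = "9780141439518"   # example ISBN
resp = requests.get(
    "https://openlibrary.org/api/books",
    params={"bibkeys": f"ISBN:{isbn}", "format": "json", "jscmd": "data"},
    timeout=10,
)
data = resp.json().get(f"ISBN:{isbn}", {})
pages = data.get("number_of_pages")
subjects = [s.get("name") for s in data.get("subjects", [])]
print(pages, subjects[:3])
```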

Databases and the Relational Model vs NoSQL

MySQL has been chosen to act as the data persistence layer of the Noesis application. The two most popular categories of databases are relational (or SQL) and non-relational (NoSQL).

A relational database is the traditional collection of tables, with their rows and columns, that most people think of when they hear the term “database”. The data is therefore structured, with a strongly imposed schema-on-write, which presents the obvious advantage that outputs and inputs are predictable and consistent. Relationships can be established between tables, and even highly complex queries, expressed with the help of SQL (Structured Query Language), can be executed with high performance. For the sake of preserving the consistency of said relationships, SQL databases impose referential integrity when it comes to transactions. This is reinforced by their ACID properties: atomicity, consistency, isolation, and durability. A relevant example of what part of that means is that an entry cannot be inserted into a table B that references a table A through a foreign key constraint if there is no row in A for that entry to point to. Cascading operations, such as cascade delete and cascade update, similarly help maintain referential integrity.

Relational databases are the ‘tried-and-tested’ approach to data persistence. They represent a mature technology that has had plenty of time for fine tuning; they are great at high-performance workloads and able to handle complex queries thanks to SQL’s ‘join’ ability. They are also compatible with many available tools. However, since no system is perfect, they present some flaws as well: they do not scale horizontally very well, and considerable time has to be invested in properly designing and maintaining a relational database server. Complications may also arise when it comes to replication and fault-tolerance, since the relational model is not inherently distributed. Furthermore, it is not always the case that structured data brings value or is even desirable from the point of view of business needs. Such circumstances call for the analysis of a NoSQL option. A NoSQL database stores data in a semi-structured or unstructured manner in one of four ways: graphs, wide-columns, key-value pairs, and documents.

A graph database uses the analogous concept from data structures, namely a graph with nodes and edges, to represent its data. It is extremely useful when the value arises from the relationships between entries rather than anything else. The second type, a wide-column store, does use rows and columns organized into tables; however, their structure is not fixed across an entire table and can differ from row to row. A key-value store is similar to a dictionary in Python or a HashMap in other programming languages, in that a key, unique within the collection, identifies an object or a compound object; just like in the previous case, the schema can differ from entry to entry. Lastly, and perhaps the most widely used, a document database saves data in a format that is very similar to JSON objects. Not coincidentally, MongoDB, a popular implementation of such a datastore, exposes a JavaScript shell and stores documents in BSON, a binary representation of JSON. The document example on the righthand side is taken from a MongoDB collection and illustrates how data is saved in the database. Objects can be nested or referenced, as can be seen from the example; however, there is no imposition on which fields each document should have, which can only happen at the application/code level.
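In the same general shape as that example, an illustrative document, written here as a Python dictionary with made-up field names and values, might look like this:

```python
book_document = {
    "_id": "614c1b2f9d1e8a0012345678",
    "title": "Pride and Prejudice",
    "author": {"name": "Jane Austen", "born": 1775},   # nested object
    "category_ids": ["classics", "romance"],           # references to other documents
    "ratings": [5, 4, 5],
}
```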

Non-relational databases are extremely good at handling caching data, user generated data, or anything that is inherently semi-structured or schema-less, such as data coming from sensors in IoT (Internet of Things) applications. Also, they tend to do better in the cloud and they scale horizontally quite easily. Their flexibility allows for a faster development process, compared to relational databases, and it also means that they do not require such a high time investment in the designing process. Some of them are easy to set up across multiple servers as well as allowing for auto-sharding (like mongoDB), while others can support ACID-like transactions.

All this being considered, consistency is either weaker or non-existent, and the case is no different for data integrity. This has far-reaching consequences, such as losing the ability to do complex joins. As previously mentioned, relational databases are much better at handling complex queries; to achieve similar results in a NoSQL environment, several queries have to be made. Add to that the fact that it is difficult to track schema changes over time, that mass updates are slow, and that the technology is still maturing, and it becomes clear that careful consideration is needed when picking either of the two database types.

Since a relational model suits our data-driven application quite well, the next step in the decision process was choosing an RDBMS (Relational Database Management System). Needless to say, Microsoft’s SQL Server would have been a natural pick for the .NET-based stack used in building Noesis; however, it is not open-source software and the present application has been developed on macOS. Until its 2017 version, SQL Server could only be used on macOS inside virtual machines; since then, Docker can be used to run it on macOS and Linux machines, but this workaround increases complexity and comes with some limitations. In light of this, MySQL was chosen primarily because it is open-source and cross-platform. It can be installed from package managers, provides a CLI (Command Line Interface) tool, and there are a couple of GUI applications that can be used to interact with the databases.

The schema was not designed in the database itself, but through an ORM (Object Relational Mapper) provided by ASP.NET Core.
At its core, an ORM converts entries in a database to programming-language objects and vice versa; however, it can also be used to model the data, define relationships, and impose constraints.
