Dhanika Thathsara Munasinghe on LinkedIn: Database seed with EF Core (2024)

Dhanika Thathsara Munasinghe

Software Developer with 6+ years experience in .NET | Angular | SQL | Azure


3 ways to add test data
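The three approaches from the original post are not reproduced in this capture. As a hedged illustration only, one common EF Core option is model-level seeding with HasData (the Role entity and values below are made up for the sketch):

```csharp
using Microsoft.EntityFrameworkCore;

public class Role
{
    public int Id { get; set; }
    public string Name { get; set; } = "";
}

public class AppDbContext : DbContext
{
    public DbSet<Role> Roles => Set<Role>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Seed rows are baked into the migration and inserted
        // when the database is created or updated.
        modelBuilder.Entity<Role>().HasData(
            new Role { Id = 1, Name = "Admin" },
            new Role { Id = 2, Name = "User" });
    }
}
```

Other common options include running inserts from a startup hook after `context.Database.Migrate()`, or the `UseSeeding`/`UseAsyncSeeding` hooks in newer EF Core versions.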


More Relevant Posts

  • Md. Sujon Parvez

    Developer @Vivasoft | C# | .NET Core | SQL Server | Angular | Vue.js | jQuery | Typescript


    When to Use List over IQueryable in C#?

    In C#, both List and IQueryable are collections that serve different purposes and offer distinct advantages. Understanding when to use each is crucial for developing efficient and maintainable applications.

    1. In-Memory Operations: List is an in-memory collection that stores data in a contiguous block of memory. It provides methods like Add, Remove, Find, and Sort, making it suitable for in-memory operations.

    2. Small Data Sets: If you're dealing with small data sets where the performance difference is negligible, using List is generally sufficient. Since List is an in-memory collection, it avoids the overhead of querying a database or remote data source, making it more efficient for smaller data volumes.

    In conclusion, List is preferable over IQueryable when your data is entirely in memory, the data set is relatively small, and the data manipulations are simple. It provides a more straightforward, efficient approach for in-memory operations and eliminates complex querying when it is unnecessary.

    However, keep in mind that IQueryable is designed for querying large datasets from remote data sources (like databases) and applying filters, projections, and other operations at the database level. For such cases, use IQueryable to leverage the advantages of deferred execution and optimized querying.
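    As a minimal sketch (the Product type and both methods below are hypothetical), the same LINQ call does very different work depending on the receiver type:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;

    public class Product
    {
        public int Id { get; set; }
        public decimal Price { get; set; }
    }

    public static class Examples
    {
        // IQueryable: with a database provider, Where/OrderBy are translated
        // to SQL and run in the database; only matching rows are materialized.
        public static List<Product> Expensive(IQueryable<Product> products) =>
            products.Where(p => p.Price > 100m).OrderBy(p => p.Price).ToList();

        // List: the whole collection is already in memory; LINQ-to-Objects
        // iterates it directly, which is fine for small data sets.
        public static List<Product> ExpensiveInMemory(List<Product> products) =>
            products.Where(p => p.Price > 100m).OrderBy(p => p.Price).ToList();
    }
    ```

    Both methods return the same result for the same data; the difference is where the filtering happens, which only matters once the underlying source is remote or large.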


  • Stefan Đokić

    ➡️ I help you improve your .NET knowledge! | Microsoft MVP


    Do you know what .HasDefaultSchema() is? When should we use it? 👇

    HasDefaultSchema in EF Core is a method used to set the default database schema for all the entities in the model. In a relational database, a schema is a way to organize and group database objects, like tables, views, and stored procedures, under a single identifier. This is particularly useful when multiple applications or services share the same database but need their tables and objects separated for clarity and security.

    By default, EF Core maps the entities to the database's default schema (like dbo in SQL Server). If you want to organize your tables into a different schema, you can call HasDefaultSchema in the OnModelCreating method of your DbContext class. The schema name you pass (say, appSchema) becomes the default schema for all entities mapped in that context, unless explicitly configured otherwise. When the database is created or updated, all tables are then created under appSchema instead of the default schema.

    This feature is useful for maintaining a clean and organized database, especially in complex systems with many entities. It's also beneficial when implementing a multi-tenant application where each tenant might have its own schema for data isolation.

    Did somebody have a chance to use it?

    P.S. If you like the post, be sure to check out my .NET Pro Weekly Newsletter. Join 10,700+ engineers here: https://lnkd.in/d9ZK8Cfu

    Explore the comprehensive GraphQL support offered by Postman. They also offer a feature for storing examples of request/response pairs, which is a great way to demonstrate the functionality of your GraphQL requests. Take a look here: https://lnkd.in/dEb9FWTR
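    The original post referred to a code screenshot that isn't in this capture; a minimal sketch of the configuration it describes (the appSchema name, Order entity, and AppDbContext are illustrative) might look like:

    ```csharp
    using Microsoft.EntityFrameworkCore;

    public class Order
    {
        public int Id { get; set; }
    }

    public class AppDbContext : DbContext
    {
        public DbSet<Order> Orders => Set<Order>();

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // All tables in this context are created under "appSchema"
            // instead of the provider default (dbo on SQL Server),
            // unless an entity overrides it, e.g. ToTable("Orders", "other").
            modelBuilder.HasDefaultSchema("appSchema");
        }
    }
    ```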


  • Spandan Sarkar

    Associate Software Engineer at ERA-InfoTech Ltd. (Azure DevOps | Angular | .Net Web API | ORACLE | MongoDB | MySQL | NodeJS)


    .NET Core Tips: The choice between using bulk insert methods and a foreach loop depends on your specific use case and performance requirements.

    1. Bulk Insert:
    - Bulk insert methods, like ADO.NET's SqlBulkCopy or third-party BulkInsert extensions for EF Core, are typically more efficient when inserting a large number of records into a database.
    - They leverage database-specific optimizations and can significantly reduce the number of individual database round-trips, leading to better performance.
    - Bulk insert is a good choice when you need to insert thousands or millions of records quickly.

    2. Foreach Loop:
    - A foreach loop inserts records one at a time, providing more control over each insert operation.
    - It may be more suitable when you need to perform some logic or validation for each record before insertion.
    - If you have a relatively small number of records to insert, a foreach loop might be simpler to implement.

    In summary, if you need to insert a large number of records quickly and performance is a primary concern, consider a bulk insert method. If you have a smaller number of records or need individual record-level operations, a foreach loop may be more appropriate. You can also consider hybrid approaches, like batching records in groups and using bulk insert for each batch, to strike a balance between performance and control.

    #aspdotnet #entityframeworkcore #backend
    https://lnkd.in/gGAhb3W7
    https://lnkd.in/gEwxq7Zh
    https://lnkd.in/gTyT9-DJ
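    A sketch of the SqlBulkCopy route described above, assuming the Microsoft.Data.SqlClient package and a hypothetical dbo.Products destination table whose columns match the DataTable:

    ```csharp
    using System.Data;
    using Microsoft.Data.SqlClient;

    public static class BulkLoader
    {
        // Push all rows in one bulk operation instead of one
        // INSERT round-trip per row.
        public static void Insert(string connectionString, DataTable rows)
        {
            using var connection = new SqlConnection(connectionString);
            connection.Open();
            using var bulk = new SqlBulkCopy(connection)
            {
                DestinationTableName = "dbo.Products",
                BatchSize = 5_000 // rows per round-trip; tune against metrics
            };
            bulk.WriteToServer(rows);
        }
    }
    ```

    BatchSize is one way to express the hybrid approach the post mentions: rows are streamed to the server in groups rather than all at once or one at a time.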

    EF Core Bulk Insert — learnentityframeworkcore.com


  • Łukasz Żabski

    Posting about .NET | Software Engineer @ Sembo


    How to easily split data into batches with LINQ? Have you heard about the LINQ Chunk method? It extends the IEnumerable<T> type and allows you to split the elements of a sequence into batches.

    Why does it matter? 🔎 Because handling data in batches can improve your process performance.

    Imagine a large collection of IDs that you need to process by:
    👉 Supplementing it with data from several database tables,
    👉 Fetching more data from another service,
    👉 Finally, saving it to the Elasticsearch index.

    Batch processing reduces the risk of overwhelming your system compared to handling all the data at once. Another example: batching multiple smaller API/DB requests into bigger ones to reduce unnecessary calls.

    Some of the best optimizations I've witnessed involved batching data and removing unnecessary calls. But remember: optimizations like this should be based on application metrics. They are the best indicators of whether your effort has improved the process.

    What are your thoughts on this topic? 🙂
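    A minimal illustration of Chunk (available since .NET 6); the batch size of 3 is arbitrary:

    ```csharp
    using System.Linq;

    public static class ChunkDemo
    {
        // Split a sequence of IDs into batches of at most 3 elements;
        // the final batch may be smaller.
        public static int[][] Batches(int[] ids) =>
            ids.Chunk(3).ToArray();
    }
    ```

    For seven IDs this yields three batches of sizes 3, 3, and 1, each of which could then be sent as one combined DB or API request.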


  • Suday Kumer Ghosh

    .NET | .NET Core | Sql Server | AWS, Azure | Microservice | Angular | React | React Native | JavaScript | WordPress


    Before designing a relational database with EF Core and .NET, we should keep a few things in mind:

    1. Don't use too many columns in a table; around 50 fields is a reasonable upper bound. If you need to keep more than that, split the entity into multiple tables with a one-to-one relation.
    2. Specify column lengths, especially for string fields. Otherwise EF Core will default them to 'max' length (nvarchar(max) on SQL Server), which degrades database performance.
    3. Don't mix regular data (e.g. char, number, or float) and image data in one table. Keep a separate table to store image data.
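    Point 2 can be expressed in OnModelCreating; the Customer entity and the chosen lengths below are illustrative:

    ```csharp
    using Microsoft.EntityFrameworkCore;

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; } = "";
        public string Email { get; set; } = "";
    }

    public class AppDbContext : DbContext
    {
        public DbSet<Customer> Customers => Set<Customer>();

        protected override void OnModelCreating(ModelBuilder modelBuilder)
        {
            // Explicit lengths map to nvarchar(100) / nvarchar(256)
            // on SQL Server instead of the default nvarchar(max).
            modelBuilder.Entity<Customer>(e =>
            {
                e.Property(c => c.Name).HasMaxLength(100);
                e.Property(c => c.Email).HasMaxLength(256);
            });
        }
    }
    ```

    The same constraint can also be declared with the [MaxLength(100)] data annotation directly on the property.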


  • Gersi M.

    Software Engineer, Systems Generalist | B2B SaaS | Data Management, Back-End Development


    Avoid using the GenerationType.AUTO strategy for ID generation in Spring Boot.

    GenerationType.IDENTITY: This strategy relies on the auto-increment functionality provided by the database to generate unique identifier values automatically.

    GenerationType.SEQUENCE: With this strategy, the @GeneratedValue annotation fetches unique identifier values from a predefined sequence generator.

    GenerationType.TABLE: In the table-based strategy, the annotation utilizes a separate table to manage and generate unique identifier values.

    GenerationType.AUTO: This strategy lets the JPA provider choose the most appropriate strategy based on the underlying database and configuration. It typically maps to either IDENTITY or SEQUENCE, depending on database capabilities.

    GenerationType.UUID: While not a standard JPA strategy, some frameworks and libraries may offer a custom strategy for generating universally unique identifiers (UUIDs) for entities.

    The issue with using GenerationType.AUTO is that it can lead to inconsistent behaviour across different database systems and configurations.


  • Steve Officer

    I solve problems using technology


    Things developers need to remember about REST, part 2 of many.

    Following on from yesterday's post about how the resources in REST are not a 1:1 mapping to the system's representation of how the data is saved: REST is also not about CRUD. We aren't just building a different front-end on top of the database. If that's all we're doing, there is no need to build an API; SQL already exists.

    Instead, the APIs should be doing something more. They should allow users to manipulate the state of a service by interacting with the concepts those services are built upon. The fact that there is or isn't a database is not the point; that is just a mechanism used to persist state. It's the concepts, and how we allow them to be interacted with, that we should be focusing on. This is much more powerful than plain old CRUD.

    This is because complex systems never just "update" state. What is the context that is driving that update? How do we represent not only the change but also the context that drove it? That's where business value comes from.


  • sudhindra km

    C++ Architect at Wipro Limited


    Using the STL in C++ APIs to push data into databases or binary database structures with high performance.

    Whenever there is a large amount of data (model files, car mesh models, or image metadata), there is often a need to read it into C++ STL containers and push it into a database. The database need not be an established library product like SQL; in some scenarios only database APIs developed by another core team are available, and you need to push model data into their database structures or tables. Examples include car model coordinates and their stress-strain values in the CAE domain, or metadata values stored inside binary image files.

    STL containers play a big role here, and you need to choose which container type to use. Prefer vector when you perform many lookups in a sorted sequence, want to access and modify each element, need to pass a contiguous list of data to underlying C programs, or don't know the size of the data in advance. Consider maps, the associative containers, when you want to store values against keys and look them up efficiently by key: for example, dictionary words as keys with their meanings as values, or counting occurrences of numbers or words in a list.

    There are many STL containers (vectors, maps, lists, sets), and the choice is driven by the requirement and its performance. To conclude, the STL is widely used for storing and managing data efficiently, which is an important part of software development for any product.

    #cpp #cplusplus


  • Farooq Ahmad

    Senior Software Engineer | React | ASP.NET Core | MVC | C# | Web API | SQL Server | JavaScript | HealthCare IT | HTML | CSS | Clean Architecture | DDD | microservices | gRPC


    Are you confused about when to use DTO records or classes in C#? Let's shed some light on this topic to help you make the right choice. DTO records and classes both have their own advantages and use cases, so it's essential to understand which one fits your specific scenario. Here's a breakdown:

    1. Simplicity and immutability: DTO records are simple data structures with read-only properties, making them ideal for transferring data between different layers of an application.
    2. Flexibility and behavior: Classes, on the other hand, provide more flexibility and can have additional behavior beyond just holding data. They are suitable when you need to encapsulate data along with methods for manipulation.
    3. Performance and safety considerations: records are reference types like classes, so raw performance is similar; the practical win is that their default immutability prevents accidental mutation of data in transit.
    4. Data transformation and mapping: DTO records are commonly used for data transfer and mapping purposes, especially when dealing with web APIs or database interactions.
    5. Compatibility and serialization: Classes offer broader compatibility and serialization options, making them a preferred choice when working with frameworks or libraries that require specific contract-based models (for example, ones that need settable properties or a parameterless constructor).

    In conclusion, choose DTO records when simplicity, immutability, and data transfer are your primary concerns. Use classes when you need additional behavior, flexibility, and compatibility with other systems. Consider the specific requirements of your application to make an informed decision.
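    A small sketch of the two options (the Customer types are illustrative):

    ```csharp
    // Positional record: immutable, value-based equality, concise -
    // a good fit for pure data transfer.
    public record CustomerDto(int Id, string Name);

    // Class: mutable by default and free to carry behavior
    // beyond holding data.
    public class CustomerModel
    {
        public int Id { get; set; }
        public string Name { get; set; } = "";

        public void Rename(string name) => Name = name;
    }
    ```

    Two CustomerDto instances with the same Id and Name compare equal with ==, while two identically populated CustomerModel instances do not, because classes use reference equality by default.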


  • Abhijeet Mukkawar

    Software Engineering Management | Scaler, Data Structures and Algorithms | Senior Consultant @ Eviden


    #databasedesignpattern tips #softwareEngineering

    When manipulating data from databases (those that are not tightly integrated with operating systems, or loosely coupled databases), how do you choose database design patterns? One of the most successful combinations I've come across is DAO (Data Access Object), VO (Value Object), and Key.

    The typical add/update/delete activities are handled by the Add, Update, and Delete methods found in DAO classes; the DAO can be extended for other database activities too. VO (Value Object) classes are blueprints for records from specific tables. For instance, if certain actions must be carried out prior to updating data in the database, the data is first extracted into the buffer class so the program can operate on it and then apply the changes. Keys are simply operations on data based on a table's key. With such a minor adjustment, software development and management become simple.
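    A minimal C# sketch of the DAO/VO/Key split described above (the Employee names are illustrative, and an in-memory dictionary stands in for the real store; a real DAO would wrap database calls):

    ```csharp
    using System.Collections.Generic;

    // VO: a blueprint for one record of the (hypothetical) Employees table.
    public class EmployeeVo
    {
        public int Id { get; set; }            // the table key
        public string Name { get; set; } = "";
    }

    // DAO: owns the add/update/delete activities against the store,
    // plus key-based lookup.
    public class EmployeeDao
    {
        private readonly Dictionary<int, EmployeeVo> _store = new();

        public void Add(EmployeeVo employee) => _store[employee.Id] = employee;
        public void Update(EmployeeVo employee) => _store[employee.Id] = employee;
        public void Delete(int key) => _store.Remove(key);

        public EmployeeVo? FindByKey(int key) =>
            _store.TryGetValue(key, out var vo) ? vo : null;
    }
    ```

    The VO acts as the buffer the post mentions: callers fetch it by key, operate on it, and hand it back to Update, keeping all storage details behind the DAO.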


