Is it possible to change the output alias in pydantic?

In this Python article we look at several answers to the question: is it possible to change the output alias in pydantic?

Solution 1

Switch aliases and field names and use the allow_population_by_field_name model config option:

class TMDB_Category(BaseModel):
    strCategory: str = Field(alias="name")
    strCategoryDescription: str = Field(alias="description")

    class Config:
        allow_population_by_field_name = True

Let the aliases configure the names of the fields that you want to return, but enable allow_population_by_field_name to be able to parse data that uses different names for the fields.

Original author of this content: Hernán Alarcón

Solution 2

An alternate option (which likely won’t be as popular) is to use a de-serialization library other than pydantic. For example, the Dataclass Wizard library supports this particular use case. If you need the same round-trip behavior that Field(alias=...) provides, you can pass the all param to the json_field function. Note that with such a library you do lose the ability to perform complete type validation, which is arguably one of pydantic’s greatest strengths; however, it does perform type conversion in a similar fashion to pydantic. There are also a few reasons why I feel that validation is not as important, which I list below.

Reasons why I would argue that data validation is a nice-to-have feature in general:

  • If you’re building and passing in the input yourself, you can most likely trust that you know what you are doing, and are passing in the correct data types.
  • If you’re getting the input from another API, then assuming that API has decent docs, you can just grab an example response from their documentation, and use that to model your class structure. You generally don’t need any validation if an API documents its response structure clearly.
  • Data validation takes time, so it can slow down the process slightly, compared to if you just perform type conversion and catch any errors that might occur, without validating the input type beforehand.

So to demonstrate that, here’s a simple example for the above use case using the dataclass-wizard library (which relies on the usage of dataclasses instead of pydantic models):

from dataclasses import dataclass

from dataclass_wizard import JSONWizard, json_field


@dataclass
class TMDB_Category:
    name: str = json_field('strCategory')
    description: str = json_field('strCategoryDescription')


@dataclass
class TMDB_GetCategoriesResponse(JSONWizard):
    categories: list[TMDB_Category] 

And the code to run that, would look like this:

input_dict = {
  "categories": [
    {
      "strCategory": "Beef",
      "strCategoryDescription": "Beef is ..."
    },
    {
      "strCategory": "Chicken",
      "strCategoryDescription": "Chicken is ..."
    }
  ]
}

c = TMDB_GetCategoriesResponse.from_dict(input_dict)
print(repr(c))
# TMDB_GetCategoriesResponse(categories=[TMDB_Category(name='Beef', description='Beef is ...'), TMDB_Category(name='Chicken', description='Chicken is ...')])

print(c.to_dict())
# {'categories': [{'name': 'Beef', 'description': 'Beef is ...'}, {'name': 'Chicken', 'description': 'Chicken is ...'}]}

Measuring Performance

If anyone is curious, I’ve set up a quick benchmark test to compare deserialization and serialization times with pydantic vs. just dataclasses:

from dataclasses import dataclass
from timeit import timeit

from pydantic import BaseModel, Field

from dataclass_wizard import JSONWizard, json_field


# Pydantic Models
class Pydantic_TMDB_Category(BaseModel):
    name: str = Field(alias="strCategory")
    description: str = Field(alias="strCategoryDescription")


class Pydantic_TMDB_GetCategoriesResponse(BaseModel):
    categories: list[Pydantic_TMDB_Category]


# Dataclasses
@dataclass
class TMDB_Category:
    name: str = json_field('strCategory', all=True)
    description: str = json_field('strCategoryDescription', all=True)


@dataclass
class TMDB_GetCategoriesResponse(JSONWizard):
    categories: list[TMDB_Category]


# Input dict which contains sufficient data for testing (100 categories)
input_dict = {
  "categories": [
    {
      "strCategory": f"Beef {i * 2}",
      "strCategoryDescription": "Beef is ..." * i
    }
    for i in range(100)
  ]
}

n = 10_000

print('=== LOAD (deserialize)')
print('dataclass-wizard: ',
      timeit('c = TMDB_GetCategoriesResponse.from_dict(input_dict)',
             globals=globals(), number=n))
print('pydantic:         ',
      timeit('c = Pydantic_TMDB_GetCategoriesResponse.parse_obj(input_dict)',
             globals=globals(), number=n))

c = TMDB_GetCategoriesResponse.from_dict(input_dict)
pydantic_c = Pydantic_TMDB_GetCategoriesResponse.parse_obj(input_dict)

print('=== DUMP (serialize)')
print('dataclass-wizard: ',
      timeit('c.to_dict()',
             globals=globals(), number=n))
print('pydantic:         ',
      timeit('pydantic_c.dict()',
             globals=globals(), number=n))

And the benchmark results (tested on Mac OS Big Sur, Python 3.9.0):

=== LOAD (deserialize)
dataclass-wizard:  1.742989194
pydantic:          5.31538175
=== DUMP (serialize)
dataclass-wizard:  2.300118940
pydantic:          5.582638598

In their docs, pydantic claims to be the fastest library in general, but it’s rather straightforward to show otherwise. As you can see, for the above dataset pydantic is roughly 2–3x slower in both the deserialization and serialization process. It’s worth noting that pydantic is already quite fast, though.

Original author of this content: rv.kvetch

Solution 3

Maybe you could use this approach:

from pydantic import BaseModel, Field


class TMDB_Category(BaseModel):
    name: str = Field(alias="strCategory")
    description: str = Field(alias="strCategoryDescription")


data = {
    "strCategory": "Beef",
    "strCategoryDescription": "Beef is ..."
}


obj = TMDB_Category.parse_obj(data)

# {'name': 'Beef', 'description': 'Beef is ...'}
print(obj.dict())
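Note the trade-off compared to Solution 1: here the aliases describe the input names, so the default .dict() already yields the renamed keys, but without allow_population_by_field_name you can only populate the model via the aliases. If you ever need to re-serialize back to the original TMDB names, by_alias=True still works (a small sketch, again using the pydantic v1 API from the answer):

```python
from pydantic import BaseModel, Field


class TMDB_Category(BaseModel):
    name: str = Field(alias="strCategory")
    description: str = Field(alias="strCategoryDescription")


obj = TMDB_Category.parse_obj(
    {"strCategory": "Beef", "strCategoryDescription": "Beef is ..."})

# Default dump uses the field names (the renamed output)
print(obj.dict())
# {'name': 'Beef', 'description': 'Beef is ...'}

# Dump by alias to restore the original TMDB key names
print(obj.dict(by_alias=True))
# {'strCategory': 'Beef', 'strCategoryDescription': 'Beef is ...'}
```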

Original author of this content: Yevhen Dmytrenko

Conclusion

So yes, it is possible to change the output alias in pydantic: swap the field names and aliases and enable allow_population_by_field_name (Solution 1), use a library such as dataclass-wizard (Solution 2), or alias the incoming names so that the field names themselves become the output keys (Solution 3). Hope this tutorial helped you.
