Overriding Comparison Operators in Python

45 minutes • 3 Learning Objectives

About this Hands-on Lab

When modeling systems in any programming language, we occasionally need our custom objects to be comparable. This is useful when a class encapsulates a calculation based on some attributes and we want to sort instances, or when we want to make direct comparisons to know whether two instances are equivalent. In Python, we implement this sort of behavior by defining a suite of double underscore (“dunder”) methods on our classes. These allow our custom class instances to be compared using the same operators we would use to check whether one integer is greater than or equal to another. In this hands-on lab, you’ll build a class to model a `Lead` for your sales team so that they can compare and rank leads in the system.
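
For a quick illustration of the mechanism (the Money class below is a throwaway example, not part of the lab), defining a dunder method is what lets ordinary operator syntax work on your own objects:

class Money:
    def __init__(self, amount):
        self.amount = amount

    def __ge__(self, other):
        # The expression `a >= b` dispatches to a.__ge__(b).
        return self.amount >= other.amount

print(Money(10) >= Money(3))  # True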

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Create the Lead Model and Calculate Lead Score

Before you can start working on the sorting of leads, you need to have a model. Your coworker has requested that you create the Lead class within the lead.py file next to test_lead.py. Be sure to create the initializer for the class and have it take the proper arguments:

  • name
  • staff_size
  • estimated_revenue
  • effort_factor

With these implemented, you’ll also want to add a method (or calculated property) to return the calculated lead score. Here’s the formula to use:

1 / (staff_size / estimated_revenue * (10 ** (digits_in_revenue - digits_in_staff_size)) * effort_factor)

The automated tests assume that lead_score is a method, but you can adjust the test file if you’d rather it be a calculated property instead.
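
If it helps to see the shape of the class, here is one possible sketch of lead.py at this point. The digit-counting approach (len(str(...))) is an assumption about how digits_in_revenue and digits_in_staff_size are meant to be derived; adjust it if the tests expect something different:

class Lead:
    def __init__(self, name, staff_size, estimated_revenue, effort_factor):
        self.name = name
        self.staff_size = staff_size
        self.estimated_revenue = estimated_revenue
        self.effort_factor = effort_factor

    def lead_score(self):
        # Count the digits in each value, e.g. 500000 -> 6 and 50 -> 2.
        digits_in_revenue = len(str(self.estimated_revenue))
        digits_in_staff_size = len(str(self.staff_size))
        return 1 / (
            self.staff_size
            / self.estimated_revenue
            * (10 ** (digits_in_revenue - digits_in_staff_size))
            * self.effort_factor
        )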

You can run the unit tests with the following command:

python -m unittest

Implement the Equivalence Comparison Operators

The first set of operators that you need to implement are the == and != operators. These should compare the lead score of the two Lead instances being compared. You can implement these using the __eq__ and __ne__ methods, and these methods should compare the lead scores of the instances. Implementing these methods should make a few more automated tests pass.
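
Sketched inside the Lead class, and assuming lead_score is the method from the previous objective, these two methods might look like this:

    def __eq__(self, other):
        # Two leads are equivalent when their lead scores match.
        if not isinstance(other, Lead):
            return NotImplemented
        return self.lead_score() == other.lead_score()

    def __ne__(self, other):
        if not isinstance(other, Lead):
            return NotImplemented
        return self.lead_score() != other.lead_score()

The NotImplemented guard is optional for this lab, but it lets Python fall back to its default behavior when a Lead is compared against an unrelated type.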

You can run the unit tests with the following command:

python -m unittest

Implement the Ordering Operators

With the equivalence operators implemented, you’re ready to move onto implementing the ordering operators: <, >, <=, and >=. These operators can be implemented by defining the __lt__, __gt__, __le__, and __ge__ methods, and these methods should compare the lead scores of the instances. Implementing these methods should get the remaining automated tests to pass.
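
Sketched the same way (inside the Lead class, delegating to lead_score), these could look like the following:

    def __lt__(self, other):
        return self.lead_score() < other.lead_score()

    def __gt__(self, other):
        return self.lead_score() > other.lead_score()

    def __le__(self, other):
        return self.lead_score() <= other.lead_score()

    def __ge__(self, other):
        return self.lead_score() >= other.lead_score()

If you prefer less repetition, the standard library's functools.total_ordering decorator can derive most of these from __eq__ and __lt__, though writing all four out keeps each operator's behavior explicit.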

You can run the unit tests with the following command:

python -m unittest

Additional Resources

You and your team are currently in the process of building out a customer relationship management (CRM) system for your sales team, and one of the big feature requests is the ability to sort leads by their lead score. Calculating the score is relatively simple, and you know that throughout the code for this system, you'll need to compare two different leads. You've decided that now is the time to implement all of the comparison operators for your Lead class so that using the class feels natural everywhere within your system. To start, a member of your team has already created a unit test file called test_lead.py and has asked you to build the class within lead.py, making sure that you get all of the automated tests to pass.

To begin, the class only needs a few attributes:

  • name - The name of the company.
  • staff_size - The number of employees at the company.
  • estimated_revenue - The estimated yearly revenue of the company.
  • effort_factor - The amount of effort that a member of the sales team thinks it will take to make the sale, as a float between 1.0 and 10.0 (inclusive).

Using staff_size, estimated_revenue, and effort_factor, you want to calculate the lead score using the following formula:

1 / (staff_size / estimated_revenue * (10 ** (digits_in_revenue - digits_in_staff_size)) * effort_factor)

While not the most scientific formula in the world, your sales team thinks that this should give them a single number that will help them know which leads to pursue before others. A higher lead score indicates a higher return on effort.
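
For example (these numbers are made up purely for illustration), a lead with a staff_size of 50 (2 digits), an estimated_revenue of 500000 (6 digits), and an effort_factor of 2.0 scores 1 / (50 / 500000 * 10 ** (6 - 2) * 2.0) = 1 / 2.0 = 0.5, while the same company with an effort_factor of 1.0 scores 1.0, since less effort means a better return.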

You can run the unit tests with the following command:

$ python -m unittest
.........
----------------------------------------------------------------------
Ran 9 tests in 0.000s

OK
