A titration (titrimetry) is a technique in which a solution of known concentration is used to determine the concentration of an unknown solution. Typically, the titrant (the known solution) is added from a buret to a known quantity of the analyte (the unknown solution) until the reaction is complete. Knowing the volume of titrant added allows the determination of the concentration of the unknown. Often, an indicator is used to signal the end of the reaction, the endpoint.
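Concretely, the unknown concentration follows from the moles of titrant delivered and the reaction stoichiometry. Below is a minimal sketch of this calculation in Python, using hypothetical numbers and assuming a simple 1:1 reaction (e.g., NaOH neutralizing HCl):

```python
# Minimal sketch of the endpoint calculation for a simple titration.
# Hypothetical numbers; assumes a known mole ratio between analyte and titrant.

def analyte_concentration(c_titrant, v_titrant, v_analyte, mole_ratio=1.0):
    """Return the analyte concentration (mol/L) at the endpoint.

    c_titrant  -- titrant concentration, mol/L
    v_titrant  -- titrant volume delivered from the buret, L
    v_analyte  -- volume of the analyte solution, L
    mole_ratio -- moles of analyte reacting per mole of titrant (1.0 for a 1:1 reaction)
    """
    moles_titrant = c_titrant * v_titrant
    moles_analyte = moles_titrant * mole_ratio
    return moles_analyte / v_analyte

# Example: 24.6 mL of 0.100 M NaOH neutralizes 25.0 mL of HCl of unknown concentration.
print(analyte_concentration(0.100, 0.0246, 0.0250))  # ~0.098 mol/L
```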
Titrations can be classified by the type of reaction. Different types of titration reactions include:
- Acid-base titrations are based on the neutralization reaction between the analyte and an acidic or basic titrant. These most commonly use a pH indicator, a pH meter, or a conductance meter to determine the endpoint.
- Redox titrations are based on an oxidation-reduction reaction between the analyte and titrant. These most commonly use a potentiometer or a redox indicator to determine the endpoint. Frequently, either the reactants or the titrant has a colour intense enough that an additional indicator is not needed.
- Complexometric titrations are based on the formation of a complex between the analyte and the titrant. The chelating agent EDTA is very commonly used to titrate metal ions in solution. These titrations generally require specialized indicators that form weaker complexes with the analyte. A common example is Eriochrome Black T for the titration of calcium and magnesium ions.
- A zeta potential titration characterizes heterogeneous systems, such as colloids, with the zeta potential playing the role of the indicator. One purpose is the determination of the iso-electric point, at which the surface charge becomes zero; this can be achieved by changing the pH or adding a surfactant (see the sketch at the end of this section). Another purpose is the determination of the optimum dose of a chemical for flocculation or stabilization.
- Gas phase titrations are titrations done in the gas phase, specifically as methods for determining reactive species by reaction with an excess of some other gas acting as the titrant. In one common gas phase titration, gaseous ozone is titrated with nitric oxide according to the reaction O3 + NO → O2 + NO2. After the reaction is complete, the remaining titrant and product are quantified (e.g., by FTIR); this is used to determine the amount of analyte in the original sample.
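Because the reaction consumes O3 and NO in a 1:1 ratio, the amount of ozone can be back-calculated from either the unreacted titrant or the NO2 produced. A minimal sketch of this arithmetic, using hypothetical mixing ratios:

```python
# Minimal sketch of the back-calculation in a gas phase titration of O3 with excess NO.
# Hypothetical mixing ratios; relies on the 1:1 stoichiometry of O3 + NO -> O2 + NO2.

def ozone_from_excess_titrant(no_added_ppb, no_remaining_ppb):
    """O3 consumed equals NO consumed for a 1:1 reaction."""
    return no_added_ppb - no_remaining_ppb

def ozone_from_product(no2_produced_ppb):
    """Each O3 molecule yields one NO2 molecule."""
    return no2_produced_ppb

# Example: 500 ppb of NO is added to the sample; 380 ppb of NO remains after reaction.
print(ozone_from_excess_titrant(500.0, 380.0))  # 120 ppb of O3 in the original sample
```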
Gas phase titration has several advantages over simple spectrophotometry. First, the measurement does not depend on path length, because the same path length is used for the measurement of both the excess titrant and the product. Second, the measurement does not depend on a linear change in absorbance as a function of analyte concentration as defined by the Beer-Lambert law. Third, it is useful for samples containing species which interfere at wavelengths typically used for the analyte.
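Returning to the zeta potential titration mentioned above, the iso-electric point can be estimated by finding the pH at which the measured zeta potential changes sign. A minimal linear-interpolation sketch over hypothetical readings:

```python
# Minimal sketch: estimating the iso-electric point from a zeta potential titration.
# Hypothetical (pH, zeta potential in mV) readings, ordered by pH.

def isoelectric_point(readings):
    """Return the interpolated pH at which the zeta potential crosses zero."""
    for (ph1, z1), (ph2, z2) in zip(readings, readings[1:]):
        if z1 == 0:
            return ph1
        if z1 * z2 < 0:  # sign change between consecutive readings
            return ph1 + (ph2 - ph1) * (-z1) / (z2 - z1)
    raise ValueError("no zero crossing in the measured pH range")

data = [(3.0, 32.0), (4.0, 21.0), (5.0, 8.0), (6.0, -5.0), (7.0, -18.0)]
print(isoelectric_point(data))  # ~5.6
```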