User:Boris Tsirelson/Sandbox

Modern mathematics treats functions quite differently than classical mathematics did. The differences are described below; their origin and meaning are explained afterwards.

Birth and infancy of the idea
Some tables compiled by ancient Babylonians may now be regarded as tables of functions. Also, some arguments of the ancient Greeks may now be regarded as integration of functions. Thus, in ancient times some functions were used, albeit implicitly. However, they were not recognized as special cases of a general notion.

Further progress was made in the 14th century. Two "schools of natural philosophy", at Oxford (William Heytesbury, Richard Swineshead) and Paris (Nicole Oresme), trying to investigate natural phenomena mathematically, came to the idea that laws of nature should be formulated as functional relations between physical quantities. The concept of function was born, including a curve as the graph of a function of one variable, and a surface as the graph of a function of two variables. However, the new concept was not yet widely exploited, either in mathematics or in its applications. Linear functions were well understood, but nonlinear functions remained intractable, except for a few isolated cases.

The name "function" was assigned to the new concept later, in 1698, by Johann Bernoulli and Gottfried Leibniz, and published by Bernoulli in 1718.

Power series
The sum of the geometric series
 * $$ 1+x+x^2+x^3+\dots = \frac1{1-x} $$

was calculated by Archimedes, but only for x = 1/4, since only this value was needed, and of course not written in this form, since algebraic notation appeared only in the 16th century. New wonderful formulas with infinite sums were discovered (and repeatedly rediscovered) from the 14th to the 17th century: for the arctangent,
 * $$ \arctan x = x - \frac{x^3}3 + \frac{x^5}5 - \dots $$

(Madhava of Sangamagramma, around 1400; James Gregory, 1671); for the logarithm,
 * $$ \log (1+x) = x - \frac{x^2}2 + \frac{x^3}3 - \dots $$

(Nicholas Mercator, 1668); and many others (Isaac Barrow, Isaac Newton, Gottfried Leibniz, ...). Nonlinear functions, desperately needed for the study of motion (Johannes Kepler, Galileo Galilei) and geometry (Pierre Fermat, René Descartes), became tractable via such infinite sums, now called power series. "Newton understood by analysis the investigation of equations by means of infinite series. In other words, Newton's basic discovery was that everything had to be expanded in infinite series." "These studies [on power series] stand in the same relation to algebra as the studies of decimal fractions to ordinary arithmetic" (Newton). Power series became a de facto standard of function: on the one hand, all functions needed in applications were successfully developed into power series; on the other hand, only functions developed into power series were tractable in the theory. It was not unusual to claim a theorem for an arbitrary function and then, in the proof, to consider its development into a power series.
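The convergence of such series is easy to observe numerically. A minimal sketch (not part of the historical record) checking the arctangent series above against the modern library function:

```python
import math

def arctan_series(x, terms=30):
    # partial sum of x - x^3/3 + x^5/5 - ...  (converges for |x| <= 1)
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(terms))

# for x = 0.5 the partial sum already agrees with math.atan to high precision
print(abs(arctan_series(0.5) - math.atan(0.5)) < 1e-12)  # True
```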

Trigonometric series
In Newtonian mechanics, coordinates of moving bodies are functions of time. Consider, for example, the classical equation for a falling body: its height h at time t is
 * $$ h = f(t) = h_0 - 0.5 g t^2 $$

(here h0 is the initial height, and g is the acceleration due to gravity). Infinitely many corresponding values of t and h are embraced by a single function f.
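As a sketch of how one function embraces infinitely many (t, h) pairs, the formula can be tabulated (the values h0 = 100 and g = 9.8 below are illustrative, not from the text):

```python
def height(t, h0=100.0, g=9.8):
    # h = h0 - 0.5 * g * t^2  (h0 and g are illustrative sample values)
    return h0 - 0.5 * g * t ** 2

# a few of the infinitely many corresponding values of t and h
for t in (0.0, 1.0, 2.0):
    print(t, height(t))
```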

The instantaneous shape of a vibrating string is described by a function (the displacement y as a function of the coordinate x), and this function changes in time:
 * $$ y = f_t (x). $$

Infinitely many functions ft are embraced by a single function f of two variables,
 * $$ y = f(x,t). $$

After some speculations by Galileo and mathematical interpretation by Brook Taylor (1715/1717) and Johann Bernoulli (1727), the mathematical theory of the vibrating string was begun by Jean d'Alembert (1746/1749). His approach is equivalent to a partial differential equation written out by Leonhard Euler in 1755,
 * $$\frac{\partial^2}{\partial t^2} f(x,t) = \frac{\partial^2}{\partial x^2} f(x,t),$$

now well-known as the one-dimensional wave equation. D'Alembert found a solution as the superposition of two waves, one traveling to the right, the other to the left:
 * $$ f(x,t) = \phi(x+t) + \psi(x-t). $$
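That any such superposition satisfies the wave equation can be checked numerically. A sketch using central finite differences and one sample choice of waves, φ = sin and ψ = cos (this particular choice is only an illustration):

```python
import math

def f(x, t):
    # sample traveling waves: phi(u) = sin(u), psi(u) = cos(u)
    return math.sin(x + t) + math.cos(x - t)

def d2(g, x, t, var, h=1e-4):
    # central second difference approximating a second partial derivative
    if var == 't':
        return (g(x, t + h) - 2 * g(x, t) + g(x, t - h)) / h ** 2
    return (g(x + h, t) - 2 * g(x, t) + g(x - h, t)) / h ** 2

x0, t0 = 0.7, 0.3
# the two second partials agree up to discretization error
print(abs(d2(f, x0, t0, 't') - d2(f, x0, t0, 'x')) < 1e-4)  # True
```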

The initial shape of the string is given by the function f0 (that is, f0(x) = f(x,0)). It was a controversial question in the 18th century whether f0 must be developable into a power series or not.

D'Alembert held the opinion that the de facto standard mentioned above still applied: f0 must be represented by a single equation. (He changed his opinion in 1780.)

The old standard was repudiated by Euler in 1744. He introduced "mixed" functions, given by different equations on two or more intervals. Moreover, he admitted functions that do not comply with any analytical law, whose graphs are traced by a free stroke of the hand.

Physically, the vibrating string may be thought of as an infinite collection of non-interacting harmonic oscillators (vibratory modes, harmonics). This idea, previously used by Euler in some special cases, was turned by Daniel Bernoulli (1755) into a general method of solving the wave equation. To this end the initial function has to be developed into a trigonometric series
 * $$ f_0(x) = c_1 \sin x + c_2 \sin 2x + c_3 \sin 3x + \dots $$

It was unclear which functions can be so developed. D. Bernoulli believed that a trigonometric series is as general as a power series. Both d'Alembert and Euler believed that a trigonometric series is less general than a power series. The truth was revealed only in the 19th century: a trigonometric series is much more general than a power series!

Heat conduction is physically very different from the vibration of a string, but mathematically it again concerns a function that changes in time, and leads to another partial differential equation,
 * $$\frac{\partial}{\partial t} f(x,t) = \frac{\partial^2}{\partial x^2} f(x,t),$$

now well-known as the one-dimensional heat equation. It was first investigated by Joseph Fourier (1807/1822); a general solution was found in the form
 * $$f(x,t) = c_1 e^{-t} \sin x + c_2 e^{-4t} \sin 2x + c_3 e^{-9t} \sin 3x + \dots$$

According to Fourier, the initial function may be mixed, in which case it cannot be developed into a power series; nevertheless, it can be developed into a trigonometric series. Fourier presented examples illustrated by graphs, but no proof. Some mathematicians (including Lagrange) found this unbelievable. Others started to investigate the problem using new approaches developed by Cauchy. According to Dirichlet, a "mixed" function can indeed be developed into a trigonometric series, provided that it is composed of finitely many bounded, continuous, monotone "pieces".
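This can be illustrated numerically. The sketch below (the midpoint-rule integration is an assumption of this illustration, not Fourier's method) develops a "mixed" function, given by two different formulas on (0, π/2) and (π/2, π), into a sine series and checks that the partial sum reproduces it:

```python
import math

def f0(x):
    # a "mixed" function in Euler's sense: two formulas on two intervals
    return x if x < math.pi / 2 else math.pi - x

def sine_coeff(n, m=4000):
    # c_n = (2/pi) * integral_0^pi f0(x) sin(nx) dx, by the midpoint rule
    h = math.pi / m
    return 2 / math.pi * h * sum(
        f0((k + 0.5) * h) * math.sin(n * (k + 0.5) * h) for k in range(m))

def partial_sum(x, terms=100):
    return sum(sine_coeff(n) * math.sin(n * x) for n in range(1, terms + 1))

x = 1.0
print(abs(partial_sum(x) - f0(x)) < 1e-2)  # True: the series reproduces f0
```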

Function
The mathematical concept of a function (also called a mapping or map) expresses dependence between two quantities, one of which is given (the independent variable, argument of the function, or its "input") and the other (the dependent variable, value of the function, or "output") is uniquely defined by the input.

A function associates a single output with every input element drawn from a fixed set, the domain of definition or simply domain. The set in which values may be taken is the codomain. The set of all resulting output values that actually occur is called the range or image of the function: the image is a subset of the codomain, but need not be the whole of it.

One important concept in mathematics is function composition: if z is a function of y and y is a function of x, then z is a function of x. This can be described informally by saying that the composite function is obtained by using the output of the first function as the input of the second one. This feature of functions distinguishes them from other mathematical constructs, such as numbers or figures.
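In programming terms, composition is simply feeding one function's output into another. A minimal sketch (the particular functions are illustrative):

```python
def compose(g, f):
    # the composite function: x -> g(f(x))
    return lambda x: g(f(x))

double = lambda x: 2 * x     # y as a function of x
square = lambda y: y * y     # z as a function of y
z_of_x = compose(square, double)

print(z_of_x(3))  # (2 * 3)^2 = 36
```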

In most mathematical fields, the terms operator, operation, and transformation are synonymous with function. However, in some contexts they may have a more specialized meaning. In particular, they often apply to functions whose inputs and outputs are elements of the same set. For example, we speak of linear operators on a vector space, which are linear transformations from the vector space into itself.

Special classes of function

 * An injective function f has the property that if $$x_1 \neq x_2$$ then $$f(x_1) \neq f(x_2)$$;
 * A surjective function f has the property that for every y in the codomain there exists an x in the domain such that $$f(x) = y$$;
 * A bijective function is one which is both surjective and injective.
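For functions between finite sets these properties can be checked directly; a sketch:

```python
def is_injective(f, domain):
    values = [f(x) for x in domain]
    return len(values) == len(set(values))  # no two inputs share an output

def is_surjective(f, domain, codomain):
    return {f(x) for x in domain} == set(codomain)  # every y is hit

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

D = [-2, -1, 0, 1, 2]
print(is_injective(lambda x: x * x, D))      # False: f(-1) == f(1)
print(is_bijective(lambda x: -x, D, D))      # True
```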

Functions in set theory
In set theory, functions are regarded as a special class of relation. A relation between sets X and Y is a subset of the Cartesian product, $$R \subseteq X \times Y$$. We say that a relation R is functional if it satisfies the condition that every $$x \in X$$ occurs in exactly one pair $$(x,y) \in R$$. In this case R defines a function with domain X and codomain Y. We then define the value of the function at x to be that unique y. We thus identify a function with its graph.
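The set-theoretic definition translates directly into code. A sketch treating a relation as a set of (x, y) pairs (the sample sets are illustrative):

```python
def is_functional(R, X):
    # functional: every x in X occurs in exactly one pair (x, y) of R
    return all(sum(1 for (a, _) in R if a == x) == 1 for x in X)

X = {1, 2, 3}
R = {(1, 'a'), (2, 'b'), (3, 'a')}   # functional: defines a function
S = {(1, 'a'), (1, 'b'), (2, 'a')}   # not: 1 occurs twice, 3 never

print(is_functional(R, X))  # True
print(is_functional(S, X))  # False
value = dict(R)             # the value at x is the unique y: value[2] == 'b'
```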

Associated sets
Let f:X → Y be a function with domain X and codomain Y. The image of a subset A of X is $$f[A] = \{ f(x) : x \in A \}$$; the image of f is the image of X under f. The pre-image of a subset B of Y is $$f^{-1}[B] = \{ x \in X : f(x) \in B \}$$. The fibre of f over a point y in Y is the preimage of the singleton {y}. The kernel of f is the equivalence relation on X for which the equivalence classes are the fibres of f.
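These sets are easy to compute for finite examples. A sketch using the squaring function on a small sample set (an illustration only):

```python
X = {-2, -1, 0, 1, 2}
sq = lambda x: x * x

def fibre(f, X, y):
    # the fibre over y: the preimage of the singleton {y}
    return {x for x in X if f(x) == y}

img = {sq(x) for x in X}    # the image of sq
print(img)                  # {0, 1, 4}
print(fibre(sq, X, 4))      # {-2, 2}

# the kernel's equivalence classes are exactly the nonempty fibres,
# and they partition X
kernel_classes = [fibre(sq, X, y) for y in img]
print(kernel_classes)
```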

Associated functions
If f is a function from a set X to a set Y, there are several functions associated with f.

If S is a subset of X, the restriction of f to S is the function from S to Y given by applying f only to elements of S. The restriction may have different properties from the original. Consider the function $$f : x \mapsto x^2$$ from the real numbers R to R. The restriction of f to the positive real numbers is injective, whereas f is not.
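A sketch of restriction, using a finite sample of reals to exhibit the injectivity contrast (the finite sample is an assumption of this illustration):

```python
def restrict(f, S):
    # the restriction of f to S: defined only for elements of S
    def g(x):
        if x not in S:
            raise ValueError("argument outside the restricted domain")
        return f(x)
    return g

sq = lambda x: x * x
sample = [-2, -1, 1, 2]                    # finite sample of reals
positives = [x for x in sample if x > 0]
r = restrict(sq, positives)

print(len({sq(x) for x in sample}) == len(sample))       # False: not injective
print(len({r(x) for x in positives}) == len(positives))  # True: injective
```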

The push-forward of f is the function $$f_\vdash$$ from the power set of X to that of Y which maps a subset A of X to its image in Y:


 * $$ f_\vdash(A) = \{ f(x) : x \in A \} . \, $$

An alternative notation for $$f_\vdash(A)$$ is $$f[A]$$ (note the square brackets).

The pull-back of f is the function $$f^\dashv$$ from the power set of Y to the power set of X which maps a subset B of Y to its pre-image in X:


 * $$ f^\dashv(B) = \{ x \in X : f(x) \in B \} . \, $$

An alternative notation for $$f^\dashv(B)$$ is $$f^{-1}[B]$$ (note the square brackets). Pull-back is a generalised form of inverse, and makes sense whether or not f is an invertible function.
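A sketch showing that push-forward and pull-back make sense for a non-invertible function, and how they partially undo each other (the sample set is illustrative):

```python
def push_forward(f, A):
    return {f(x) for x in A}             # also written f[A]

def pull_back(f, X, B):
    return {x for x in X if f(x) in B}   # also written f^{-1}[B]

X = {-2, -1, 0, 1, 2}
sq = lambda x: x * x                     # not invertible on X

A = {1, 2}
B = {1, 9}
print(pull_back(sq, X, push_forward(sq, A)))  # {-2, -1, 1, 2}, a superset of A
print(push_forward(sq, pull_back(sq, X, B)))  # {1}, a subset of B
```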