Two functional inequalities

$\textbf{Problem 1.}$ Find all functions $f:\mathbb{R}\to\mathbb{R}$ such that for all $x$ and $y$ holds

$$f(x)f(x+y)\ge f(x)^2+xy.$$


$\textbf{Problem 2.}$ Find all differentiable functions $f:\mathbb{R}\to\mathbb{R}$ with $f(0)=0,\ f(1)=1$ and such that for all $x$ and $y$ holds

$$f(x+y)\ge 2022^xf(x)+f(y).$$

Hausdorff dimension and Liouville numbers

In this post we present a proof of the fact that the set of Liouville numbers contained in $[0,1]$ has Hausdorff dimension $0$. 

Why is this result interesting, apart from computing the Hausdorff dimension of a particular set? Regarding Lebesgue measure, it is not obvious whether there exists an uncountable set of measure zero, and the Liouville numbers serve as such an example. But they serve, moreover, as an example of an uncountable set with Hausdorff dimension zero, which is much stronger: every set of dimension zero has measure zero, whereas, for example, another standard example of an uncountable set of measure $0$, the Cantor set, has positive Hausdorff dimension ($\ln 2/\ln 3$). Thus the set of Liouville numbers is, in a sense, far smaller than the Cantor set. One could say that the Liouville numbers are among the smallest possible uncountable sets of measure zero. You can look here for another construction of an uncountable set of dimension zero. Another interesting aspect of the Liouville numbers is that they form a dense $G_\delta$ subset of $[0,1]$ (a property stronger than uncountability). Hence the Liouville numbers are big in the sense of cardinality and topology, yet among the smallest with respect to measure and dimension.

Preliminary notions. We recall the definition of Hausdorff dimension.
For a subset $U$ of a metric space (e.g. the real line), define $\operatorname{diam}(U)$ to be the diameter of $U$, i.e. the supremum of the distances between any two elements of $U$. If $U$ is an interval, this is just its length. Now for a set $X$, for $d\ge 0$ (which will play the role of a dimension) and $\delta>0$ define
$$H_\delta^d(X)=\inf\left \{\sum_{i=1}^\infty (\operatorname{diam} U_i)^d: X\subseteq\bigcup_{i=1}^\infty U_i,\ \operatorname{diam} U_i<\delta\right \}.$$
One observes that $H_\delta^d(X)$ is nonincreasing as a function of $\delta$ (smaller $\delta$ means fewer admissible covers $\{U_i\}_{i\ge 1}$). Thus we define $$\displaystyle\mathcal{H}^d(X)=\sup_{\delta>0}H_\delta^d(X)=\lim_{\delta\to 0}H_\delta^d(X).$$ This turns out to be a measure (defined at least on the Borel $\sigma$-algebra, the one generated by the open sets); it is called the $d$-dimensional Hausdorff measure. One may observe that $\mathcal{H}^d(X)$ is nonincreasing as a function of $d$ with values in $[0,\infty]$, and it attains at most one finite nonzero value (consider, for example, a subset of $[0,1]$).
Define the Hausdorff dimension as the value of $d$ at which the measure switches from $\infty$ to $0$, i.e.
$$\dim_{\operatorname{H}}{(X)}=\inf\{d\ge 0: \mathcal{H}^d(X)=0\}.$$
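The interplay between $d$ and the cover sums is easy to see on the Cantor set mentioned above: its standard level-$k$ cover consists of $2^k$ intervals of length $3^{-k}$. A minimal numerical sketch (the helper name `cantor_cover_sum` is purely illustrative):

```python
import math

def cantor_cover_sum(level: int, d: float) -> float:
    """Sum of diam(U_i)^d over the standard level-`level` cover of the
    Cantor set: 2^level intervals, each of diameter 3^(-level)."""
    return 2 ** level * (3.0 ** -level) ** d

d_star = math.log(2) / math.log(3)  # the Hausdorff dimension of the Cantor set

for level in (5, 10, 20):
    print(level,
          cantor_cover_sum(level, d_star),        # stays at 1 for every level
          cantor_cover_sum(level, d_star + 0.1),  # decreases toward 0
          cantor_cover_sum(level, d_star - 0.1))  # increases without bound
```

Below $d_\ast=\ln 2/\ln 3$ the sums blow up, above it they vanish, and exactly at $d_\ast$ they stay bounded away from $0$ and $\infty$, which is the behaviour the definition above captures.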

To the problem. We want to show that the set of Liouville numbers in $[0,1]$ has dimension $0$. Denote this set by $L$. Recall that, by definition, $x\in L$ iff for each positive integer $n$ there exist infinitely many pairs of positive integers $(p,q)$ such that $$\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$
Note that the usual definition requires the existence of at least one such pair $(p,q)$; it is easy to see that if at least one pair exists for each $n$, then infinitely many pairs exist for each $n$.

So let us unpack what it means for a set to have dimension $0$. Going back to the above definitions (in reverse order) we see that it suffices to show that $\mathcal{H}^d(L)=0$ for every $d>0$. This in turn reduces to showing that $H_\delta^d(L)=0$ for every $\delta>0$. This means that
$$\inf\left \{\sum_{i=1}^\infty (\operatorname{diam} U_i)^d: L\subseteq\bigcup_{i=1}^\infty U_i,\ \operatorname{diam} U_i<\delta\right \}=0.$$
Finally, unwrapping the infimum we may summarise the task to be done as follows:

 For any $d>0$, $\delta>0$ and $\varepsilon>0$ there exists a countable collection $\{U_i\}_{i\ge 1}$ such that $\displaystyle L\subset \bigcup_{i\ge 1}U_i$, $\operatorname{diam}U_i<\delta$ for all $i$, and
$$ \sum_{i=1}^\infty (\operatorname{diam} U_i)^d<\varepsilon.$$ So fix $d>0$, $\delta>0$ and $\varepsilon>0$. We may assume $d<1$. Fix $\displaystyle n>\frac{3}{d},\ q_0>\max\left\{\frac{2}{\delta},\frac{2}{\varepsilon}\right\}$ and consider
$$I_{p,q}:=\left(\frac{p}{q}-\frac{1}{q^n},\frac{p}{q}+\frac{1}{q^n}\right).$$ One observes (recalling the modified, still equivalent definition we stated) that $$L\subset \bigcup_{q>q_0}\bigcup_{p=1}^{q-1}I_{p,q}.$$ Moreover $\displaystyle\operatorname{diam}I_{p,q}=\frac{2}{q^n}<\frac{2}{q_0}<\delta.$ Finally
 $$\sum_{q>q_0}\sum_{p=1}^{q-1}(\operatorname{diam}I_{p,q})^d\le\sum_{q>q_0}\frac{2^d}{q^{nd}}q\le 2\sum_{q>q_0}\frac{1}{q^{nd-1}}\le 2\sum_{q>q_0}\frac{1}{q^{2}}\le \frac{2}{q_0}<\varepsilon.$$ In the above we used the inequality $$\sum_{k>n}\frac{1}{k^2}\le \frac{1}{n}$$ which could be proven, for example, by estimating the sum from above by $\displaystyle \int_n^\infty\frac{1}{x^2}dx$.

Thus the proof is finished.
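One can also sanity-check the final estimate numerically. The sketch below picks arbitrary sample values for $d$, $\delta$ and $\varepsilon$, chooses $n$ and $q_0$ as in the proof, and truncates the infinite sum over $q$ (the terms decay like $q^{1-nd}$, so the truncation changes little):

```python
d, delta, eps = 0.5, 0.1, 0.01          # arbitrary sample parameters
n = int(3 / d) + 1                      # ensures n > 3/d
q0 = int(max(2 / delta, 2 / eps)) + 1   # ensures q0 > max(2/delta, 2/eps)

Q = 100_000                             # truncation point for the sum over q
# every interval I_{p,q} in the cover is shorter than delta
assert all(2 / q ** n < delta for q in range(q0 + 1, Q))
# total d-dimensional size of the cover, as in the chain of inequalities
total = sum((q - 1) * (2 / q ** n) ** d for q in range(q0 + 1, Q))
print(total, total < eps)
```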

You can also check the Wikipedia articles for further information on Hausdorff dimension and Liouville numbers.

Problem proposed by prof. Gadjev

$\textbf{Problem.}$

 Let $f:[0,1]\to\mathbb{R}$ be a continuous function.

Find the limit 

$$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n f\left(\frac{\ln k}{\ln n }\right).$$

 

$\textbf{Solution.}$ 

Fix $\varepsilon>0$. We aim to show that for large enough $n$  

$$\left|\frac{1}{n}\sum_{k=1}^n f\left(\frac{\ln k}{\ln n }\right)-f(1)\right|<3\varepsilon$$

 which would mean that the limit is $f(1).$

Let $M$ be an upper bound for $|f|$ on $[0,1]$, i.e. $|f(x)|\le M$ for all $x\in [0,1].$

 For all $n\ge 1$ define $$k(n)=\left\lfloor\frac{n}{n^{1/\sqrt{\ln n}}}\right\rfloor.$$

The number is chosen in such a way that $$\lim_{n\to\infty}\frac{k(n)}{n}=0 \quad\text{and}\quad \lim_{n\to\infty}\frac{\ln(k(n))}{\ln n}=1.$$
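Both limits can be observed numerically (a quick sketch; the convergence of the second ratio is slow, of order $1/\sqrt{\ln n}$):

```python
import math

def k(n: int) -> int:
    # k(n) = floor(n / n**(1/sqrt(ln n))), as in the proof
    return math.floor(n / n ** (1 / math.sqrt(math.log(n))))

for n in (10**3, 10**6, 10**12):
    print(n, k(n) / n, math.log(k(n)) / math.log(n))
# The first ratio decreases toward 0 while the second increases toward 1.
```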

 Split the sum in two:

$$\left|\frac{1}{n}\sum_{k=1}^n f\left(\frac{\ln k}{\ln n }\right)-f(1)\right|\le \left|\frac{1}{n}\sum_{k=1}^{k(n)} f\left(\frac{\ln k}{\ln n }\right)\right|+\left|\frac{1}{n}\sum_{k=k(n)+1}^{n} f\left(\frac{\ln k}{\ln n }\right)-f(1)\right| \ \ \ (*)$$

 Take $n_1$ such that for $n>n_1$ holds $\displaystyle \frac{k(n)}{n}<\frac{\varepsilon}{M}$. For the first summand on the right, using the triangle inequality, we obtain

$$\left|\frac{1}{n}\sum_{k=1}^{k(n)} f\left(\frac{\ln k}{\ln n }\right)\right|\le \frac{1}{n}\sum_{k=1}^{k(n)} \left|f\left(\frac{\ln k}{\ln n }\right)\right|\le \frac{Mk(n)}{n}<\varepsilon$$

when $n>n_1$. 

Now take $\delta>0$ such that $|f(x)-f(1)|<\varepsilon$ whenever $|x-1|<\delta$. Take $n_2$ such that for $n>n_2$ holds $\displaystyle \left|\frac{\ln(k(n))}{\ln n}-1\right|<\delta.$ Now for $n>\max\{n_1,n_2\}$ we obtain 

 $$\left|\frac{1}{n}\sum_{k=k(n)+1}^{n} f\left(\frac{\ln k}{\ln n }\right)-f(1)\right|\le \sum_{k=k(n)+1}^{n} \frac{1}{n}\left|f\left(\frac{\ln k}{\ln n }\right)-f(1)\right|+\left|\frac{k(n)}{n}f(1)\right|<\frac{n-k(n)}{n}\varepsilon+\varepsilon\le 2\varepsilon$$

 Thus for $n>\max\{n_1,n_2\}$ from $(*)$ we obtain 

$$\left|\frac{1}{n}\sum_{k=1}^n f\left(\frac{\ln k}{\ln n }\right)-f(1)\right|\le \left|\frac{1}{n}\sum_{k=1}^{k(n)} f\left(\frac{\ln k}{\ln n }\right)\right|+\left|\frac{1}{n}\sum_{k=k(n)+1}^{n} f\left(\frac{\ln k}{\ln n }\right)-f(1)\right|<\varepsilon+2\varepsilon=3\varepsilon.$$
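As a numerical sanity check of the limit, the sketch below evaluates the averages for $f=\cos$ (an arbitrary continuous test function); the drift toward $f(1)$ is slow, of order $1/\ln n$:

```python
import math

def average(f, n: int) -> float:
    # the average (1/n) * sum of f(ln k / ln n) for k = 1..n
    return sum(f(math.log(k) / math.log(n)) for k in range(1, n + 1)) / n

for n in (10**3, 10**5, 10**6):
    print(n, average(math.cos, n))  # drifts slowly toward cos(1)
```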

$\textbf{Remark.}$ We only used that the function $f$ is bounded and continuous at $1$. The proof relied heavily on the existence of the function (sequence) $k$. Functions for which such a $k$ exists (like $\ln$) are called super slowly varying; they arise in probability theory. The same proof carries over to any such function. 

The proof proposed by prof. Gadjev relies on the following asymptotic equivalences

$$\sum_{k=1}^n f\left(\frac{\ln k}{\ln n }\right)\equiv\int_1^n  f\left(\frac{\ln x}{\ln n }\right)dx\underbrace{=}_{x=n^y}\int_0^1  f\left(y\right)\ln(n)n^ydy$$

 and consequently $$\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n f\left(\frac{\ln k}{\ln n }\right)=\lim_{n\to\infty}\int_0^1  \frac{\ln n}{n}n^y f\left(y\right)dy.$$

 This reasoning can be made rigorous. After that, what remains is to observe that the sequence of functions 

$ \displaystyle\frac{\ln n}{n-1}n^y$ (each of which integrates to $1$ over $[0,1]$) tends to the Dirac delta distribution concentrated at $1$.
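The concentration of this kernel can be illustrated numerically. The sketch below (the helper name `kernel_average` is illustrative, and $f=\cos$ is an arbitrary test function) approximates $\int_0^1 \frac{\ln n}{n-1}n^y f(y)\,dy$ by the midpoint rule:

```python
import math

def kernel_average(f, n: int, steps: int = 200_000) -> float:
    """Midpoint-rule approximation of the integral of
    (ln n / (n - 1)) * n**y * f(y) over [0, 1]."""
    h = 1 / steps
    c = math.log(n) / (n - 1)   # normalisation: the kernel integrates to 1
    return sum(c * n ** ((i + 0.5) * h) * f((i + 0.5) * h) * h
               for i in range(steps))

for n in (10, 1_000, 100_000):
    print(n, kernel_average(math.cos, n))  # approaches cos(1)
```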

Asymptotics of the solution of a differential equation

Let $f$ be a twice continuously differentiable function defined on $\mathbb{R}$, such that $f(x)f''(x)=1$ for all $x\ge 0$, $f(0)=1$ and $f'(0)=0$.

Find $$\lim_{x\to\infty} \frac{f(x)}{x\sqrt{\ln(x)}}.$$

Modification on a sequence from VJIMC

The first two limits of the following problem were proposed at VJIMC, 2005, Category I.

The exact value of the last limit was proposed by a user at https://artofproblemsolving.com , where you can also see his solution along different lines.

 Let $(x_n)_{n\ge 2}$ be a sequence of real numbers, such that $x_2>0$ and for every $n\ge 2$ holds 

$$x_{n+1}=-1+\sqrt[n]{1+nx_n}.$$


Prove consecutively that  $$1)\ \lim_{n\to\infty}x_n=0,\  \ \ 2)\ \lim_{n\to\infty}nx_n=0,\  \ \ 3)\ \lim_{n\to\infty}n^2x_n=4.$$


$\textbf{Proof.}$  $\textbf{1)}$ Clearly all the elements of the sequence are positive. The inequality $-1+\sqrt[n]{1+nx_n}<x_n$ is equivalent to $1+nx_n<(1+x_n)^n$, which is seen to be true by expanding the right-hand side, since all the summands there are positive. This shows that the sequence is strictly decreasing. Hence

$$0<x_{n+1}=-1+\sqrt[n]{1+nx_n}\le -1+\sqrt[n]{1+nx_2},$$

and since the right hand side clearly tends to $0$ we obtain $\displaystyle\lim_{n\to\infty}x_n=0.$

$\textbf{2)}$ Now $1/x_n\to +\infty$ strictly increasingly and we can use the Stolz–Cesàro theorem as follows:

$$\lim_{n\to\infty}nx_n =\lim_{n\to\infty}\frac{n}{\frac{1}{x_n}}=\lim_{n\to\infty}\frac{(n+1)-n}{\frac{1}{x_{n+1}}-\frac{1}{x_n}}=\lim_{n\to\infty}\frac{x_{n+1}x_n}{x_n-x_{n+1}}.$$

Now consider the defining equation. It can be rewritten as $(1+x_{n+1})^n=1+n x_n$ hence $ x_n=x_{n+1}+S/n$ where $\displaystyle S=\sum_{k=2}^n{n\choose k}x_{n+1}^k.$

Thus 

$$\lim_{n\to\infty}\frac{x_{n+1}x_n}{x_n-x_{n+1}}=\lim_{n\to\infty}\frac{x_{n+1}\left(x_{n+1}+\frac{S}{n}\right)}{\frac{S}{n}}=\lim_{n\to\infty}\frac{nx_{n+1}^2}{S},$$

where we have used that $x_{n+1}\to 0$. Now this can be rewritten as follows

$$\lim_{n\to\infty}\frac{nx_{n+1}^2}{S}=\lim_{n\to\infty}\frac{n}{{n\choose 2}+S'},$$

 where $\displaystyle S'=\sum_{k=3}^n{n\choose k}x_{n+1}^{k-2}>0.$ This shows that the last limit is $0$.

$\textbf{3)}$ Apply the Stolz–Cesàro theorem in the very same way as above:

$$\lim_{n\to\infty}n^2x_n =\lim_{n\to\infty}\frac{n^2}{\frac{1}{x_n}}=\lim_{n\to\infty}\frac{(n+1)^2-n^2}{\frac{1}{x_{n+1}}-\frac{1}{x_n}}=\lim_{n\to\infty}(2n+1)\frac{x_{n+1}x_n}{x_n-x_{n+1}}=2\lim_{n\to\infty}\frac{n^2x_{n+1}^2}{S}.$$

For convenience we work with the reciprocal limit 

$$\lim_{n\to\infty}\frac{S}{n^2x_{n+1}^2}=\lim_{n\to\infty}\frac{\sum_{k=2}^n{n\choose k}x_{n+1}^k}{n^2x_{n+1}^2}=\frac{1}{2}+\lim_{n\to\infty}\sum_{k=3}^n{n\choose k}\frac{1}{n^2}x_{n+1}^{k-2}.$$

 We would be done if we prove that the last limit is $0$. Denote $A_n=\displaystyle \sum_{k=3}^n{n\choose k}\frac{1}{n^2}x_{n+1}^{k-2}$. Clearly $A_n>0$. Fix $\varepsilon>0$. According to $2)$, for large enough $n$ holds $\displaystyle x_{n+1}<\frac{\varepsilon}{n}.$

Thus (after completing the sum to the full binomial expansion) $$A_n<\sum_{k=3}^n{n\choose k}\frac{\varepsilon^{k-2}}{n^k}=\frac{-n \varepsilon ^2+2 n \left(\frac{n+\varepsilon }{n}\right)^n-2 n \varepsilon -2 n+\varepsilon ^2}{2 n \varepsilon ^2},$$

hence $$\limsup_{n\to\infty}A_n\le \lim_{n\to\infty} \frac{-n \varepsilon ^2+2 n \left(\frac{n+\varepsilon }{n}\right)^n-2 n \varepsilon -2 n+\varepsilon ^2}{2 n \varepsilon ^2}=\frac{-\varepsilon ^2-2 \varepsilon +2 e^{\varepsilon }-2}{2 \varepsilon ^2}.$$

This holds for arbitrary $\varepsilon>0$. Letting $\varepsilon\to 0$ in the last bound we obtain (using the Taylor expansion of the exponential near $0$) that  

$$\lim_{\varepsilon\to 0}\frac{-\varepsilon ^2-2 \varepsilon +2 e^{\varepsilon }-2}{2 \varepsilon ^2}=0,$$

whence $\displaystyle\limsup_{n\to\infty}A_n\le 0$; since $A_n>0$, this gives $\displaystyle\lim_{n\to\infty}A_n=0$, which finishes the proof.
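A quick numerical simulation of the recursion agrees with all three limits (a sketch; the starting value $x_2=1$ is an arbitrary positive choice):

```python
x = 1.0  # x_2; any positive starting value works
n = 2
while n < 2_000:
    x = -1 + (1 + n * x) ** (1 / n)  # x now equals x_{n+1}
    n += 1
print(x, n * x, n * n * x)  # x_n -> 0, n*x_n -> 0, n^2*x_n near 4
```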

IMO shortlist, 1998

G7. $ABC$ is a triangle with $\angle ACB = 2 \angle ABC$. $D$ is a point on the side $BC$ such that $DC = 2BD$. $E$ is a point on the line $AD$ such that $D$ is the midpoint of $AE$. Show that $\angle ECB + 180^\circ = 2\angle EBC$.

 

Solution.  Place $D$ at the origin. We can then rewrite everything in terms of $a$, the complex number corresponding to $A$, and $b$, the complex number corresponding to $B$. Thus $C$ is $-2b$ and $E$ is $-a$.
The relation between the angles at $B$ and $C$ is easily reduced to the following equation:
$$\frac{(a-b)^2}{|a-b|^2}\frac{a+2b}{|a+2b|}=\frac{b^3}{|b|^3}.$$
Squaring and expressing the moduli via conjugates (using $|z|^2=z\bar z$), we obtain
$$\frac{(a-b)^2}{(\bar a-\bar b)^2}\frac{a+2b}{\bar a+2\bar b}=\frac{b^3}{\bar b^3}.$$
Clearing the denominators and moving everything to the left we obtain
$$a^3\bar b^3-3ab^2\bar b^3-\bar a^3b^3+3\bar a\bar b^2 b^3=0\ (*)$$
Similarly, the equality we want to prove is equivalent to
$$\left(\frac{\frac{-b}{|-b|}}{\frac{-a-b}{|-a-b|}}\right)^2=(-1)
\left(\frac{\frac{-a+2b}{|-a+2b|}}{\frac{b}{|b|}}\right)$$
The arguments of the complex numbers on both sides lie between $180^\circ$ and $360^\circ$, so squaring is an equivalent transformation here. As above we obtain
$$\frac{(a+b)^2}{(\bar a+\bar b)^2}\frac{a-2b}{\bar a-2\bar b}=\frac{b^3}{\bar b^3}.$$
Moving everything to the left and expanding, we obtain
$$-a^3\bar b^3+3ab^2\bar b^3+\bar a^3b^3-3\bar a\bar b^2 b^3=0.$$
But this is just $(*)$ multiplied by $-1$, so we are done.
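The identity can also be verified numerically on concrete triangles. The sketch below (the helper names `check` and `angle` are illustrative) builds a triangle with $\angle ABC=\beta$ and $\angle ACB=2\beta$ in Cartesian coordinates, constructs $D$ and $E$ as in the statement, and measures the angle defect:

```python
import math

def check(beta_deg: float) -> float:
    """Return |(angle ECB + 180) - 2*angle EBC| in degrees for a triangle
    with angle ABC = beta and angle ACB = 2*beta (requires 3*beta < 180)."""
    b = math.radians(beta_deg)
    B, C = (0.0, 0.0), (1.0, 0.0)            # side BC of unit length
    ab = math.sin(2 * b) / math.sin(3 * b)   # law of sines: AB = sin C / sin A
    A = (ab * math.cos(b), ab * math.sin(b))
    D = (1 / 3, 0.0)                         # D on BC with DC = 2*BD
    E = (2 * D[0] - A[0], 2 * D[1] - A[1])   # D is the midpoint of AE

    def angle(P, Q, R):                      # undirected angle PQR at vertex Q
        v = (P[0] - Q[0], P[1] - Q[1])
        w = (R[0] - Q[0], R[1] - Q[1])
        dot = v[0] * w[0] + v[1] * w[1]
        return math.degrees(math.acos(dot / (math.hypot(*v) * math.hypot(*w))))

    return abs(angle(E, C, B) + 180 - 2 * angle(E, B, C))

for beta in (20, 40, 55):
    print(beta, check(beta))  # all values vanish up to floating-point error
```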

A problem told by Zhivko Petrov

 The following integral was proposed as homework to Applied Mathematics students by Zhivko Petrov. 

$\textbf{Problem}.$ Evaluate

$$\int_{0}^1\frac{\ln(1-x+x^2)}{x^2-x}\text{d} x.$$

Later I communicated the problem to prof. Gadjev and he proposed a neat solution (to be presented later).

Now we elaborate a solution based on an idea proposed by prof. Babev. I would like to thank the aforementioned people, as well as David Petrov, for pointing me to that idea.

$\textbf{Solution}.$ Introduce $$I(y)=\int_0^1\frac{\ln(1-y(x-x^2))}{x^2-x}\text{d}x.$$

We need to find $I(1)$. Clearly $I(0)=0$. Using differentiation under the integral sign we obtain 

$$I'(y)=\int_0^1\frac{1}{1-y(x-x^2)}\text{d}x.$$

Assuming that $y\in [0,1]$, one can easily evaluate the last integral to obtain 

$$I'(y)=\frac{4 \arcsin\left(\frac{\sqrt{y}}{2}\right)}{\sqrt{y(4-y) }}.$$

The latter is very easy to integrate (for example, by making the change of variables $y=4t^2$) in order to obtain

$$I'(y)=\left(4 \arcsin\left(\frac{\sqrt{y}}{2}\right)^2\right)'.$$

Thus 

$$I(1)=\int_0^1I'(y)\text{d}y+I(0)= 4 \arcsin\left(\frac{\sqrt{1}}{2}\right)^2-4 \arcsin\left(\frac{\sqrt{0}}{2}\right)^2+0=4 \arcsin\left(\frac{1}{2}\right)^2=\frac{\pi^2}{9}.$$
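A direct numerical evaluation agrees with the value $\pi^2/9\approx 1.0966$. The sketch below uses the midpoint rule; the integrand extends continuously to the endpoints, since $\ln(1-x+x^2)\sim x^2-x$ there:

```python
import math

def integrand(x: float) -> float:
    return math.log(1 - x + x * x) / (x * x - x)

steps = 200_000
h = 1 / steps
# midpoint rule over [0, 1]; the sample points avoid the endpoints
approx = sum(integrand((i + 0.5) * h) for i in range(steps)) * h
print(approx, math.pi ** 2 / 9)  # the two values agree closely
```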

Property of join

 Let $X$ be a path-connected space and $Y$ an arbitrary topological space. Then the join $X*Y$ is simply connected.   $\textbf{Proof}.$ We use Van K...