In this chapter we will introduce sequences of simplicial complexes and define functions on them, the simplicial parameters.

Moreover, we will explain what we mean by testing a simplicial parameter in constant time.

Finally, we will give an explicit algorithm to test the Euler characteristic

and a main theorem for testing simplicial parameters.
 
%This will provide our first example of testing a topological invariant. To be more precise, we do not test topological invariants, but invariants over the number of vertices.
 
%Consequently, our first definition will be specify the class of objects we test.
 
%Let $\tilde \Sigma$ be any of the classes of (not rooted) simplicial complexes defined in the previous chapter.
 
%Let $\tilde\Sigma$ be one of the classes $\tilde\Sigma = \Sigma, \Sigma_O, \Sigma_{|4}$ and $\Sigma_{|4,O}$.
 
 
 
\section{Convergence} \label{sec:convergence}
 
%=================================================================================================
 
Another structure based on simplicial complexes that we want to construct is that of convergent sequences. This will be done by means of patterns $\alpha$, i.e.\ small parts of simplicial complexes.
 
 
%path -> connected vertices -> component
 
% ->B_r
 
 
% 1-skeleton
 
Let $x,y\in V(K)$. A \textbf{path}\index{path} of \textbf{length}\index{length} $l$ from $x =: x_0$ to $y =: x_l$ in a simplicial complex $K$ is a sequence $\{x_n\}_{n=0}^l \subset K_0$ such that $\{x_n, x_{n+1}\}$ is an edge of $K$ for all $n < l$. This gives rise to the distance\index{distance} map
 
\[d(x,y) = \min \left( \{l \mid \exists \text{ path from $x$ to $y$ of length $l$} \} \cup \{\infty\} \right).\]
 
$x$ and $y$ are \textbf{connected}\index{connected!vertices} \IFF $d(x,y)<\infty$.
 
This property induces an equivalence relation on the vertices of $K$.
 
The equivalence classes of this relation are called \textbf{components}\index{component} of $K$.
 
If there is only one such component, $K$ is \textbf{connected}\index{connected!simplicial complex}.
 
In this case $d$ is a metric.
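The distance above is simply the graph distance on the $1$-skeleton and can be computed by a breadth-first search. The following sketch is not part of the thesis; it assumes a hypothetical representation of a finite complex as a set of simplices, each given as a \texttt{frozenset} of vertex labels.

\begin{verbatim}
# Sketch (assumption: a complex is a set of simplices, each a frozenset
# of vertex labels). d(x, y) is the graph distance on the 1-skeleton.
from collections import deque

def adjacency(simplices):
    """Adjacency lists of the 1-skeleton."""
    adj = {}
    for s in simplices:
        for a in s:
            adj.setdefault(a, set())
            for b in s:
                if a != b:
                    adj[a].add(b)
    return adj

def distance(simplices, x, y):
    """Length of a shortest path from x to y, float('inf') if none exists."""
    adj = adjacency(simplices)
    seen, queue = {x}, deque([(x, 0)])
    while queue:
        v, d = queue.popleft()
        if v == y:
            return d
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                queue.append((w, d + 1))
    return float('inf')

# Example: boundary of a triangle; every pair of vertices has distance 1.
triangle = {frozenset(s) for s in [{0}, {1}, {2}, {0, 1}, {1, 2}, {0, 2}]}
assert distance(triangle, 0, 2) == 1
\end{verbatim}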
 
%In case of two subcomplexes $K^1$ and $K^2$ of $K$ we define $d(K^1,K^2)$ as the minimum of $d(x,y)$ with $x\in K^1$ and $y\in K^2$.
 
%adjacent
 
A simplicial complex is \textbf{rooted}\index{rooted simplicial complex} if one of its vertices is distinguished as the root.
 
A \textbf{rooted isomorphism}\index{isomorphism!rooted} $\phi:(V,S,p)\to (V',S',p')$ preserves the root,
 
i.e. $\phi(p)=p'$.
 
The rooted \textbf{$r$-ball}\index{ball} in a complex $K=(V,S)$ around $x\in K_0$ is a rooted simplicial complex given by
 
$$B_r(x) := (V', \{ \tau \in S \mid \tau \subset V' \}, x), \text{ where }
 
V' := \{ y\in V \mid d(y,x)\leq r  \}.
 
 
$$
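For illustration, the $r$-ball can be extracted by a breadth-first search of depth $r$ on the $1$-skeleton. The sketch below uses the same hypothetical representation as the previous sketch and returns the triple $(V', \{ \tau \in S \mid \tau \subset V' \}, x)$.

\begin{verbatim}
# Sketch (assumption: a complex is a set of frozenset simplices).
# B_r(x) keeps the vertices within distance r of x and every simplex
# spanned by them, rooted at x.
from collections import deque

def ball(simplices, x, r):
    adj = {}
    for s in simplices:
        for a in s:
            adj.setdefault(a, set()).update(b for b in s if b != a)
    dist = {x: 0}                      # breadth-first search up to depth r
    queue = deque([x])
    while queue:
        v = queue.popleft()
        if dist[v] < r:
            for w in adj.get(v, ()):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
    V_prime = set(dist)
    return V_prime, {s for s in simplices if s <= V_prime}, x
\end{verbatim}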
 
 
The property defined next will be an essential assumption for all further results.
\begin{definition}[vertex degree bound]\index{bound of vertex degree|(}

The \textbf{vertex degree}\index{vertex!degree} of a point $p$ in a simplicial complex $K$ is the number of edges of $K$ containing $p$. We say that a simplicial complex $K$ has \textbf{bounded vertex degree} $d$

if every vertex $p$ of $K$ has vertex degree at most $d$.

\index{bound of vertex degree|)}

\end{definition}


Let $\Sigma^d$ be the class of simplicial complexes of vertex degree bound $d$.

Let $Z_{*}^{d}$ be the set of rooted isomorphism classes of rooted simplicial complexes of $\Sigma^d$. On $Z_{*}^{d}$ there is a natural filtration\index{filtration} given by the sets $Z_{*}^{d, r}$ of $r$-balls of $Z_{*}^d$.

For any simplicial complex $K$ and any $\alpha\in Z_{*}^{d,r}$ we may define $T_\alpha^r(K)$ as the set of vertices $x\in V(K)$ such that $B_r(x)\simeq \alpha$.

Note that $Z_{*}^{d, r}$ is a subset of $Z_{*}^{d, r+1}$ since any $r$-ball can be considered as an $(r+1)$-ball.

Now we can calculate the probability of a pattern\index{pattern} $\alpha \in Z^{d,r}_*$ to occur at a given vertex of $K\in\Sigma^d$ as

$$
p_\alpha^r(K) := \frac{\# T_\alpha^r(K)}{\# V(K)}.
 
$$
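These probabilities are plain relative frequencies and could be computed as sketched below. The helper \texttt{canonical\_form}, assumed to map a rooted ball to a representative of its rooted isomorphism class, is hypothetical; an actual implementation would require a rooted isomorphism test.

\begin{verbatim}
# Sketch (hypothetical helpers): `ball` as in the sketch above and
# `canonical_form`, mapping a rooted ball to a representative of its
# rooted isomorphism class, are supplied by the caller.
from collections import Counter

def pattern_frequencies(vertices, simplices, r, ball, canonical_form):
    """Return {alpha: p_alpha^r(K)} where p = #T_alpha^r(K) / #V(K)."""
    counts = Counter(canonical_form(ball(simplices, x, r)) for x in vertices)
    return {alpha: c / len(vertices) for alpha, c in counts.items()}
\end{verbatim}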
 
 
It will turn out to be more convenient to enumerate all $p_\alpha^r(K)$ for fixed $d$ with indices in $\N$
 
\begin{equation}\label{eq:list}
 
p_0 = p_{\alpha_0}^{r_0}, \;\: p_1 = p_{\alpha_1}^{r_1},\;\: p_2 = p_{\alpha_2}^{r_2},\;\: \ldots
 
\end{equation}
 
such that $r_0 \leq r_1 \leq r_2 \leq \ldots$ and $\alpha_n \in Z_*^{d,r_n}$ for all $n\in\N$.
 
 
\begin{definition}\label{def:convergence}\index{convergence!of simplicial complexes}
 
A sequence $\{K^n\}_{n=0}^\infty \subset \Sigma^d$ is \textbf{convergent} \IFF
 
\begin{enumerate}
 
\item $\# V(K^n) \to \infty$ for $n\to\infty$
 
\item for all $m\in\N$ the probabilities $p_m(K^n)$ converge for $n\to\infty$.
 
\end{enumerate}
 
\end{definition}
 
 
\begin{remark}[convergence in graph theory\index{convergence!of graphs}]
 
This definition of convergence is inspired by the convergence of graphs \cite{BCLSSV05}.

The $1$-skeletons of a convergent sequence of simplicial complexes, considered as graphs, converge in this sense.
 
\end{remark}
 
 
\begin{definition}[global orientation]\index{orientation!global}
 
For $i\geq 1$ the orientations of two $i$-simplices with a common $(i-1)$-simplex $\sigma$ are called
 
\textbf{compatible}\index{orientation!compatible}, if the induced orientations on $\sigma$ are different.
 
A simplicial complex $K$ of dimension $n$ is called globally oriented
 
if all $n$-simplices are equipped with orientations that are compatible with one another.
 
\end{definition}
 
In particular, if $K$ is a triangulation of an oriented manifold\index{manifold}\index{orientation!of a manifold},
 
there is a  unique compatible orientation of all $n$-simplices of $K$ \cite[9.7.2]{StZ88}.
 
 
\index{pattern!classes of ---s|(}
 
All definitions introduced so far in this section, e.g.\ that of a rooted complex, can be established naturally for oriented complexes.
 
In order to test homological invariants, we will consider various classes of simplicial complexes, namely
 
\begin{itemize}
 
\item $\Sigma$, the class of all (not oriented) (finite) simplicial complexes, commonly denoted by $K$.
 
\item $\Sigma_O$, the class of all (finite) simplicial complexes endowed with an orientation of all simplices,
 
commonly denoted by $Q$. (see \ref{def:orientation})
 
\item $\Sigma_{|4}$ the class of all triangulations of oriented $4k$-dimensional manifolds,

whose $4k$-simplices are compatibly oriented but whose other simplices are not oriented.
 
Elements of this class will commonly be denoted by $K$.
 
\item $\Sigma_{|4, O}$ the class of all oriented triangulations, commonly denoted by $Q$,

of oriented $4k$-dimensional manifolds,

whose $4k$-simplices are compatibly oriented.
 
\end{itemize}
 
Let $\tilde \Sigma$ be any of the classes $\Sigma$, $\Sigma_O$, $\Sigma_{|4}$ and  $\Sigma_{|4, O}$.
 
We define further
 
\begin{itemize}
 
\item $\tilde\Sigma^d\subset \tilde \Sigma$ the class of all complexes of $\tilde\Sigma$ of vertex degree bound $d$.
 
\item $\tilde Z^d_{*}$ the class of all rooted isomorphism classes of rooted simplicial complexes of $\tilde \Sigma^d$
 
\item $\tilde Z^{d,r}_{*}$ the class of all elements of $\tilde Z^d_{*}$ of radius not greater than $r$,
 
commonly denoted by $\alpha$.
 
The radius is defined as the maximal distance to the root.
 
\end{itemize}
 
\index{pattern!classes of ---s|)}
 
 
Like above, we define $T_\alpha^r(K)$ for any $K\in\tilde\Sigma^d$ and $\alpha\in\tilde Z_{*}^d$ as the set of vertices $x\in V(K)$ such that $B_r(x) \simeq \alpha$ \wrt the isomorphisms of the class $\tilde \Sigma^d$.
 
Moreover, for every class $\tilde\Sigma^d$ we define the probability
 
$$
 
p_\alpha^r(K) := \frac{\# T_\alpha^r(K)}{\# V(K)}.
 
$$
 
like in the case $\tilde\Sigma^d = \Sigma^d$.
 
For every filtration of a class of patterns there is likewise an enumeration $p_0, p_1, \ldots$ as in \eqref{eq:list} such that $r_0 \leq r_1 \leq r_2 \leq \ldots$.
 
Thus we obtain definitions of convergence of a sequence in all classes $\tilde\Sigma^d$,
 
analogous to \ref{def:convergence}\index{convergence!of simplicial complexes}.
 
 
The concept of $r$-balls and convergence can be extended to $\sigma$-balls where $\sigma$ is any simplex:
 
The \textbf{$r$-ball around a simplex}\index{ball!around a simplex} $\sigma\in K$
 
where $K\in \tilde\Sigma$ is given by
 
$$
 
B_r(\sigma) := (V', \{ \tau \in K \mid \tau \subset V' \}, \sigma), \text{ where }
 
V' := \{ y\in V \mid \exists x \in \sigma:  d(y,x)\leq r  \}.
 
$$
 
We call the ball of radius 1 the \textbf{adjacent ball}\index{ball!adjacent} $\adj \sigma$.
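A sketch of this variant, under the same hypothetical representation as before: the only change compared with the ball around a vertex is that the breadth-first search is started from all vertices of $\sigma$ simultaneously.

\begin{verbatim}
# Sketch: B_r(sigma) via a multi-source breadth-first search started
# from all vertices of the root simplex sigma.
from collections import deque

def simplex_ball(simplices, sigma, r):
    adj = {}
    for s in simplices:
        for a in s:
            adj.setdefault(a, set()).update(b for b in s if b != a)
    dist = {x: 0 for x in sigma}
    queue = deque(sigma)
    while queue:
        v = queue.popleft()
        if dist[v] < r:
            for w in adj.get(v, ()):
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
    V_prime = set(dist)
    return V_prime, {s for s in simplices if s <= V_prime}, sigma
\end{verbatim}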
 
Let $\tilde Z_{\blacktriangle}^{d}$ be the set of rooted isomorphism classes of simplicial complexes of $\tilde\Sigma^d$ rooted at a simplex.
 
\index{pattern!with simplicial root}
 
On $\tilde Z_{\blacktriangle}^{d}$ there is a natural filtration
 
given by the sets $\tilde Z_{\blacktriangle}^{d, r}$ of $r$-balls of $\tilde Z_{\blacktriangle}^d$.
 
For any simplicial complex $K$ and any $\alpha\in \tilde Z_{\blacktriangle}^{d,r}$ we may define $T_\alpha^r(K)$ as the set of simplices $\sigma\in K$ such that $B_r(\sigma)\simeq \alpha$.
 
Now we can calculate the probability of a pattern $\alpha \in \tilde Z^{d,r}_\blacktriangle$ to occur at a given simplex of $K\in\tilde\Sigma^d$, normalized by the number of vertices, as
 
$$
 
p_\alpha^r(K) := \frac{\# T_\alpha^r(K)}{\# V(K)}.
 
$$
 
\index{convergence!of patterns with simplicial root|(}
 
It is possible to reduce the convergence of patterns with simplicial root to the convergence of patterns with vertex root:
 
 
\begin{proposition}\label{prop:convergenceSimplices}
 
A sequence $\{K^n\}_{n=0}^\infty \subset \tilde\Sigma^d$ is \textbf{convergent} \IFF
 
\begin{enumerate}
 
\item $\# V(K^n) \to \infty$ for $n\to\infty$
 
\item $p_\alpha^r(K^n)$ converges for all $r\in\N$ and for all $\alpha\in\tilde  Z_\blacktriangle^{d}$.
 
\end{enumerate}
 
\end{proposition}
 
 
\begin{proof}
 
The reverse direction is obvious since vertices are $0$-simplices.
 
 
For the other direction only (ii) has to be proven.
 
Let $r\in\N$ and $\alpha\in \tilde Z_\blacktriangle^{d,r}$ be given

such that the root $\sigma$ of $\alpha$ is an $i$-simplex.
 
For $x\in \Root \alpha$ let $\alpha^x$ be the vertex-rooted ball obtained from $\alpha$

by replacing the root simplex by the root vertex $x$.
 
\begin{IEEEeqnarray*}{rCl}
 
p_\alpha^r (K^n) &=& \frac{\# T_\alpha^r(K^n)}{\# K^n_0}  \\

&=& \frac{1}{\# K^n_0}\frac{1}{\# \sigma} \sum_{x\in\sigma\subset\alpha}

\sum_{\substack{\beta\in \tilde Z_*^{d,r+1}\\ \exists \iota:\alpha^x \hookrightarrow \beta }}

\# T_\beta^{r+1}(K^n) \\
 
&=&\frac{1}{i+1} \sum_{x\in\sigma\subset\alpha}
 
\sum_{\substack{\beta\in \tilde Z_*^{d,r+1}\\ \exists \iota: \alpha^x \hookrightarrow \beta }}
 
p_\beta^{r+1}(K^n)
 
\end{IEEEeqnarray*}
 
for all $n\in\N$, where $\iota$ denotes a rooted embedding.
 
As a finite sum of convergent sequences, $p_\alpha^r (K^n)$ converges for $n\to\infty$.
 
\end{proof}
 
\index{convergence!of patterns with simplicial root|)}
 
 
%\subsection{Connection between oriented and not not oriented complexes}
 
Obviously, if an oriented sequence $\{Q^n\}\subset \Sigma^d_O$ or $\Sigma_{|4,O}^d$ converges, then the corresponding non-oriented sequence converges as well. As we will see, a similar statement holds for the converse direction. This enables us to switch between the classes of patterns we introduced. Note that one could avoid doing so by equipping all complexes we consider with an orientation. The proof of the following proposition is based on a technique used by Elek in \cite[prop. 2.2]{Elek10R}.
 
\begin{proposition}\label{prop:orientedSequ}
 
Let $\{K^n\}_{n=0}^\infty \subset \Sigma^d$ be a convergent sequence,
 
then there exists an oriented convergent sequence $\{Q^n\}_{n=0}^\infty \subset \Sigma^d_O$
 
such that $\{K^n\}_{n=0}^\infty$ is obtained from $\{Q^n\}_{n=0}^\infty$ by forgetting the orientation.
 
\end{proposition}
 
 
\begin{lemma}\label{lem:orientedSequ}
 
We keep the assumptions of the proposition and fix $\alpha\in Z_{*}^{d,r}$.
 
There is a natural number $l$ depending only on $r$ and $d$
 
and a partition $\bigsqcup_{i=1}^l B_i^n = T^r_\alpha(K^n)$ for all $n \geq 0$
 
such that for $x\neq y$ in $B_i^n$ the distance of $x$ and $y$ is greater than $2r$.
 
\end{lemma}
 
 
\begin{proof}\index{coloring|(}
 
We construct a complex $H^n$ on the vertex set $V(K^n)$

containing all edges $\{x,y\}$ with $0\neq d(x,y) \leq 2r$.

The obtained complex has vertex degree bound

$d + d(d-1) + \ldots + d(d-1)^{2r-1} \leq \sum_{k=1}^{2r} d^{k} \leq d^{2r+1}.$

Let $l:=d^{2r+1}+1$.
 
The following algorithm colors $H^n$ with $l$ different colors:

Start: give an arbitrary vertex an arbitrary color.

Iterate over the remaining vertices: due to the vertex degree bound, for any uncolored vertex $y$ there is a color not used by any already colored vertex adjacent to $y$; give $y$ one of those colors.
 
Let $B_i^n$ be the vertices of $T_\alpha^r(K^n) \subset V(H^n)$ colored by the $i$-th color.
 
\index{coloring|)}
 
\end{proof}
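The coloring step in this proof is an ordinary greedy coloring. A minimal sketch, assuming a hypothetical distance oracle \texttt{dist} on the vertices (e.g.\ the breadth-first-search distance on the $1$-skeleton from section \ref{sec:convergence}):

\begin{verbatim}
# Sketch of the greedy colouring used in the lemma. `dist(x, y)` is a
# hypothetical distance oracle on the vertices of K^n; `num_colors`
# plays the role of l from the lemma.
def greedy_partition(vertices, dist, r, num_colors):
    """Colour the vertices so that vertices of equal colour are pairwise
    at distance greater than 2r; return the colour classes."""
    color = {}
    for y in vertices:
        used = {color[x] for x in color if 0 < dist(x, y) <= 2 * r}
        # the degree bound of H^n guarantees a free colour among num_colors
        color[y] = min(c for c in range(num_colors) if c not in used)
    classes = [set() for _ in range(num_colors)]
    for v, c in color.items():
        classes[c].add(v)
    return classes
\end{verbatim}

The sets $B_i^n$ of the lemma are then the intersections of the color classes with $T_\alpha^r(K^n)$.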
 
 
\begin{proof}[Proof of \ref{prop:orientedSequ}]\index{random!variable|(}
 
The idea is to show that a randomly chosen orientation of $\{K^n\}_{n}$ almost surely yields a convergent sequence.
 
By $\{\sigma, -\sigma\}$ we denote the set of orientations of a simplex $\sigma\in K^n$.
 
Let $\{\sigma, -\sigma\}$ be a Bernoulli space\index{Bernoulli space},
 
i.e. a probability space\index{probability space} with the uniformly distributed probability measure
 
$\probability(\sigma) = \probability(-\sigma) = \frac{1}{2}$.
 
Now we introduce the Bernoulli space of $\{K^n\}_n$ defined as
 
\[\Omega := \prod_{n=0}^\infty \prod_{\sigma \in K^n} \{\sigma, -\sigma\}\]
 
endowed with the product measure.
 
Every element $\kappa$ of $\Omega$ represents an orientation of all complexes $\left\{K^n\right\}_n$
 
yielding an oriented sequence $\{Q^n_\kappa\}_n$.
 
 
Let $r$ and $\alpha \in Z_{*,O}^{d, r}$ be given, corresponding to $\beta \in Z_{*}^{d,r}$. For every $n\in\N$ and $x\in K^n_0$ there is a natural random variable on $\Omega$
 
$$X^n_x(\kappa) := \begin{cases}
 
1 & \text{if } B_r(x) \subset Q^n_\kappa \text{ coincides with } \alpha \\
 
0 & \text{otherwise.}
 
\end{cases}$$
 
Likewise, we define
 
$$\Omega_\alpha := \prod_{\sigma\in\alpha} \{\sigma, -\sigma\} \quad \text{and} \quad
 
X_{\Root \alpha} (\kappa) := \begin{cases}
 
1 & \text{if } \beta_\kappa \text{ coincides with } \alpha \\
 
0 & \text{otherwise.}
 
\end{cases}$$
 
on $\Omega_\alpha$.
 
Let $p_\beta^r$ be $\lim_{n\to\infty} p_\beta^r(K^n)$.
 
We will prove that for almost all $\kappa\in \Omega$
 
\begin{equation}\label{orientedSequ}
 
p_\alpha^r(Q^n_\kappa)
 
\stackrel{\text{def}}{=} \frac{1}{\# K^n_0} \sum_{x\in K^n_0} X^n_x (\kappa)
 
\stackrel{n\to\infty}{\longrightarrow}
 
% \frac { \# \{ \text{representatives of }\alpha\}}
 
% { 2^{ \# \{ \text{simplices of } \alpha \}} } 
 
% p_\beta^r
 
\expectVal X_{\Root \alpha} \cdot p_\beta^r =: \mu p_\beta^r
 
\end{equation}
 
where $\expectVal X_{\Root \alpha}$ denotes the expected value\index{expected value}. This will prove our proposition since the intersection of countably many sets of measure 1 has measure 1 again, and there are only countably many combinations of $r$ and $\alpha$.
 
 
We choose $l$ and $B_i^n\subset T_\beta^r(K^n)$ according to lemma \ref{lem:orientedSequ}.

\Wlog we may assume that $\# B_1^n\geq\# B_2^n\geq\ldots$ for all $n\in \N$.
 
Fix $q\in\N$. Let
 
$$l(q,n) := \max \left\{ i \mid \# B_i^n \geq \frac{2^{-q}}{l} \# K_0^n\right\}
 
\quad\text{and}\quad
 
l(q) := \limsup_{n\to\infty} l(q,n).$$
 
We consider now the sequences $\{\tilde B_1^k\}$,\ldots, $\{\tilde B_{l(q)}^k\}$ for $k$ from 1 to $\infty$ with $\tilde B_i^k := B_i^{n(k)}$ where $n(k)$ is the index of the $k$-th element of $\{B_i^n\}_n$ containing at least $(2^{-q}/l) \# K_0^n$ vertices. In other words, at most a $2^{-q}$-fraction of the vertices of each complex $K^n$ is not contained in

$\tilde B^n := \bigsqcup_{i=1}^{l(q,n)} B_i^n \subset \bigsqcup_{i,k} \tilde B_i^k$. The whole construction of the partition can be visualized as follows \cite{Wh}:
 
 
\begin{center}\begin{tikzpicture}[xscale=0.3, yscale=-.3]
 
%lattice:
 
\draw[step=1, help lines] ( 0  ,0) grid (20  ,12);
 
\draw[step=1, help lines, dashed] (20  ,0) grid (21.8,12);
 
\draw[step=1, help lines, dashed] (23.2,0) grid (25  ,12);
 
\draw[step=1, help lines] (25  ,0) grid (30  ,12);
 
\draw[step=1, help lines, dashed] (30  ,0) grid (31.8,12);
 
 
%$\bigsqcup_{i,k} \tilde B_i^k$-volumne
 
\fill[black!20, opacity=0.5] ( 0,9) -- (1,9) -- (1,2)
 
-- (2,2) -- (2,9)
 
-- (4,9) -- (4,7)
 
-- (5,7) -- (5,0)
 
-- (6,0) -- (6,8)
 
-- (7,8) -- (7,7)
 
-- (8,7) -- (8,8)
 
-- (9,8) -- (9,4)
 
-- (10,4)--(10,9)
 
-- (11,9)--(11,8)
 
-- (12,8)--(12,10)
 
-- (13,10)--(13,8)
 
-- (15,8)--(15,2)
 
-- (16,2)--(16,6)
 
-- (17,6)--(17,5)
 
-- (18,5)--(18,7)
 
-- (19,7)--(19,5)
 
-- (20,5)
 
-- (20,12) -- (0,12) -- cycle;
 
\fill[black!20, opacity=0.5](25,7) -- (26,7)--(26,4)
 
-- (27,4)--(27,6)
 
-- (28,6)--(28,7)
 
-- (29,7)--(29,6)
 
-- (30,6)
 
-- (30,12) -- (25,12)-- cycle;
 
\draw[] (17,10) node {$\bigsqcup_{i,k} \tilde B_i^k$};
 
\shade[left color=gray!20,right color=gray!0, opacity=0.5, rounded corners]
 
(20,6) -- (21.8,6) -- (21.8,12) -- (20,12) -- cycle;
 
\shade[left color=gray!20,right color=gray!0, opacity=0.5, rounded corners]
 
(30,6) -- (31.8,6) -- (31.8,12) -- (30,12) -- cycle;
 
\shade[right color=gray!20,left color=gray!0, opacity=0.5, rounded corners]
 
(25,8) -- (23.2,8) -- (23.2,12) -- (25,12) -- cycle;
 
 
%$\tilde B^2$        ========================================
 
\filldraw[fill=black!50, opacity=0.5] (1,2) -- (2,2) -- (2,12) -- (1,12) -- cycle;
 
\draw        (1.5,1) node {$\tilde B^2$};
 
 
%$\tilde B_i^3$        ========================================
 
\draw[]  (0,6) node[left] {$i$};
 
\draw[-]  (-.2,6) -- (10,6);
 
\filldraw[fill=black!50, opacity=0.5] (9,6) -- (10,6) -- (10,7) -- (9,7) -- cycle;
 
\draw        (10,6.5) node[right] {$\tilde B_i^3$};
 
 
%axis
 
\draw[->] (0,12) --node[above] {$n$} (5,12);
 
 
%$l(q)$-line
 
\draw[]  (0,4) node[left] {$l(q)$};
 
\draw[-]  (-.2,4) -- (20,4);
 
\draw[dashed] (20,4) -- (21.8,4);
 
\draw[dashed] (23.2,4) -- (25,4);
 
\draw[-]  (25,4) -- (30,4);
 
\draw[dashed] (30,4) -- (31.8,4);
 
 
%$l(q,n)$-line
 
\draw[]  (0,9) node[left] {$l(q,n)$};
 
\draw[-] (-.2,9) -- (1,9) -- (1,2)
 
-- (2,2) -- (2,9)
 
-- (4,9) -- (4,7)
 
-- (5,7) -- (5,0)
 
-- (6,0) -- (6,8)
 
-- (7,8) -- (7,7)
 
-- (8,7) -- (8,8)
 
-- (9,8) -- (9,4)
 
-- (10,4)--(10,9)
 
-- (11,9)--(11,8)
 
-- (12,8)--(12,10)
 
-- (13,10)--(13,8)
 
-- (15,8)--(15,2)
 
-- (16,2)--(16,6)
 
-- (17,6)--(17,5)
 
-- (18,5)--(18,7)
 
-- (19,7)--(19,5)
 
-- (20,5) -- (20,6);
 
\draw[dashed] (20,6) -- (21.8,6);
 
\draw[dashed] (23.2,8) -- (25,8);
 
\draw[-]  (25,8) -- (25,7) -- (26,7)--(26,4)
 
-- (27,4)--(27,6)
 
-- (28,6)--(28,7)
 
-- (29,7)--(29,6)
 
-- (30,6);
 
\draw[dashed] (30,6) -- (31.8,6);
 
\end{tikzpicture}\end{center}
 
 
 
 
 
 
Since for distinct $x,y\in\bigsqcup_{n=0}^\infty B_i^n$ the $r$-neighborhoods are disjoint for all $i$, the random variables $X_x^n$ with $x\in\bigsqcup_n B_i^n$ are i.i.d.
 
Hence the strong law of large numbers\index{law of large numbers, strong} holds
 
\cite[thm. 11.1.1. on p.~386]{Dud04}, i.e. for all $i = 1, \ldots, l(q)$
 
$$\lim_{k\to\infty} \frac{ \sum_{x\in \tilde B_i^k} X_x^{n(k)} (\kappa) }{\# \tilde B_i^k}
 
= \expectVal X_{\Root \alpha} \stackrel{\text{def}}{=} \mu$$
 
for almost all $\kappa\in\Omega$. Hence
 
$$
 
\lim_{n\to\infty} \frac{ \sum_{x\in \tilde B^n} X_x^n (\kappa)  }{ \# \tilde B^n }
 
= \lim_{n\to\infty} \sum_{i=1}^{l(q,n)} \frac{\# B_i^n}{\sum_{j=1}^{l(q,n)} \# B_j^n}

\frac{ \sum_{x\in B_i^n} X_x^n (\kappa)}{\# B_i^n}
 
%+ \sum_{i=l(p,n)+1}^{l(p)} 0 \cdot N
 
= \mu
 
$$
 
for almost all $\kappa\in\Omega$.
 
Let $T^n := T_\beta^r(K^n)$ and $\overline{\lim} := \limsup\limits_{n\to\infty}$.
 
By definition we have
 
\begin{equation*}
 
\limsup\limits_{n\to\infty} \left|p_\alpha^r(Q^n_\kappa) - \mu p_\beta^r \right|
 
= \overline{\lim} \left|\frac{\# T^n}{\# K^n_0} \cdot
 
\frac {  \sum_{x\in T^n} X_x^n (\kappa)  }{\# T^n}
 
- p_\beta^r \cdot \mu \right|.
 
\end{equation*}
 
$p_\beta^r = \lim_{n\to\infty} \# T^n/\# K^n_0$ factors out. Since the probability $p_\beta^r$ is not greater than one, we observe
 
\begin{IEEEeqnarray*}{rCl}
 
\overline{\lim} \left|p_\alpha^r(Q^n_\kappa) - \mu p_\beta^r \right|%} \\
 
&\leq& \overline{\lim} \left|
 
%\frac {  \sum_{ x\in\tilde B^n, x\in T^n\setminus\tilde B^n} X_x^n (\kappa) }
 
\frac {  \sum_{ x\in\tilde B^n} X_x^n (\kappa)
 
+  \sum_{ x\in T^n\setminus\tilde B^n} X_x^n (\kappa) }
 
{\# T^n}
 
- \mu \right| \\
 
&\leq& \overline{\lim} \left|\frac{ \sum_{x\in\tilde B^n} X_x^n (\kappa) + t}{\# T^n} - \mu \right|
 
+ \left| \frac{\sum_{ x\in T^n\setminus\tilde B^n} X_x^n (\kappa) - t}{ \# T^n} \right| \\
 
&\leq& \overline{\lim} \left|\frac{ \sum_{x\in\tilde B^n} X_x^n (\kappa) + t}
 
{\# \tilde B^n + \# \left(T^n\setminus\tilde B^n\right)}
 
- \mu \right|
 
+ \left| \frac{\# \left(T^n\setminus\tilde B^n\right)}{ \# T^n} \right|
 
\end{IEEEeqnarray*}
 
for all $t\in \left[0,\# \left(T^n\setminus\tilde B^n\right)\right]$. By adequate choice of $t$ we obtain
 
\begin{IEEEeqnarray*}{rCl}
 
\limsup_{n\to\infty} \left|p_\alpha^r(Q^n_\kappa) - \mu p_\beta^r \right|
 
&\leq& \overline{\lim} \left|\frac{ \sum_{x\in\tilde B^n} X_x^n (\kappa)  }
 
{ \# \tilde B^n }
 
- \mu \right|
 
+ \left| \frac{\# \left(T^n\setminus\tilde B^n\right)}{ \# T^n} \right| \\
 
&\leq& 0 + 2^{-q}.
 
\end{IEEEeqnarray*}
 
for almost all $\kappa\in\Omega$.
 
Hence, letting $q\to\infty$, \eqref{orientedSequ} is proven. Thus our proposition holds.
 
\index{random!variable|)}
 
\end{proof}
 
 
\begin{proposition}\label{prop:orientedSequGlobal}
 
Let $\{K^n\}_{n=0}^\infty \subset \Sigma^d_{|4}$ be a convergent sequence,
 
then there exists an oriented convergent sequence $\{Q^n\}_{n=0}^\infty \subset \Sigma^d_{|4,O}$
 
such that $\{K^n\}_{n=0}^\infty$ is obtained from $\{Q^n\}_{n=0}^\infty$
 
by forgetting about orientation of simplices of dimension less than $\dim K^n$ for all $n$.
 
\end{proposition}
 
 
\begin{proof}
 
Replace $\Omega$ in the proof of the previous proposition by
 
\[
 
\Omega := \prod_{n=0}^\infty \prod_{\sigma \in K^n\setminus K^n_{ \dim K^n}} \{\sigma, -\sigma\}
 
.\qedhere\]
 
\end{proof}
 
 
 
\section[Testability]{Definition of Simplicial Parameter and Testability}%------------------------------------------
 
 
\begin{definition}[simplicial parameter\index{simplicial!parameter}]
 
A function from a set of simplicial complexes $S \subset \tilde \Sigma$ to the real numbers $\R$ is called a simplicial parameter.
 
\end{definition}
 
This definition is analogous to the term graph parameter\index{graph parameter}
 
in graph theory \cite[sec. 1.1]{Elek10R}.
 
We proceed with the class of simplicial parameters that will be of interest in this thesis.
 
It is derived from the notion of convergence introduced in the previous section.
 
\begin{definition}[convergence of a simplicial parameter\index{convergence!simplicial parameter}]

We say that a simplicial parameter $\varphi$ on $S\subset\tilde\Sigma$ converges

if for all $d\in \N$ and any convergent sequence $\{K^n\}_{n=0}^\infty$ in $S \cap \tilde\Sigma^d$ the limit
 
$\lim_{n\to\infty} \varphi(K^n)$ exists.
 
\end{definition}
 
 
The probabilities $p_\alpha^r$ of the occurrence of $r$-balls $\alpha$ around simplices are convergent simplicial parameters due to proposition \ref{prop:convergenceSimplices}.
 
Another easy but, as we will see, quite useful example of a convergent parameter is given by the densities of $i$-simplices $\rho_i$.
 
 
\begin{lemma}\label{lem:convNumb_iSimpl}
 
The \textbf{simplex densities}\index{simplex!density} $\rho_i: K \mapsto(\# K_i) / (\# K_0)$,

where $K\in\tilde\Sigma^d$, converge for all $i=1,\ldots, d$.
 
\end{lemma}
 
 
\begin{proof}
 
Let $\{K^n\}_{n=0}^\infty$ be a convergent sequence in $\Sigma^d$.
 
Let $S_i(\alpha)$ be the number of $i$-simplices
 
that contain the root of $\alpha \in Z_{*}^{d,1}$ as a vertex.
 
Since every $i$-simplex has $i+1$ vertices we have the equation

$$(i+1) \# K^n_i

= \sum_{x\in K^n_0} S_i(\adj x) .
 
$$
 
Therefore, $(i+1)\rho_i(K^n)$ equals
 
$$
 
\frac{1}{\# K_0^n} \sum_{x\in K^n_0} S_i(\adj x)
 
= \frac{1}{\# K_0^n} \sum_{\alpha\in Z_{*}^{d,1}} S_i(\alpha) \# T_{\alpha}^1(K^n)
 
= \sum_{\alpha\in Z_{*}^{d,1}} S_i(\alpha) p_{\alpha}^1(K^n).
 
$$
 
This converges since $\{p_{\alpha}^1(K^n)\}_n$ converges for all $\alpha\in Z_{*}^{d,1}$ and $Z_{*}^{d,1}$ is finite. Hence $\rho_i(K^n)$ converges.
 
\end{proof}
 
 
%\begin{definition}
 
% The Euler characteristic $\chi(K)$ of a simplicial complex $K\in\Sigma$ is give by
 
% \[ \chi(K) := \# K_0 - \# K_2 +  \# K_3 \mp \ldots \]
 
%\end{definition}
 
 
%\begin{proposition}\label{prop:convEuler}
 
% The simplicial parameter $\varphi_\chi: K \mapsto \chi(K) / (\# K_0)$ on $\Sigma^d$ converges.
 
%\end{proposition}
 
%
 
%\begin{proof}
 
% Let $\{K^n\}_{n=0}^\infty$ be a convergent sequence in $\Sigma^d$. Due to vertex degree bound
 
% \[\varphi_\chi(K^n) = 1 - \varphi_1(K^n) + \varphi_2(K^n) - \varphi_3(K^n) \pm \ldots + (-1)^d \varphi_d(K^n)\]
 
% for all $n\in\N$
 
% As a finite sum of convergent sequences $\{\rho_i(K^n)\}_{n=0}^\infty$,
 
% the sequence $\{\varphi_\chi(K^n)\}_{n=0}^\infty$ converges.
 
% Hence $\varphi_\chi$ converges by definition.
 
%\end{proof}
 
 
\begin{definition}[random sampling] \index{random!sampling}\index{sampling, random}
 
Let $r$ and $N$ be natural numbers; let $K\in \tilde \Sigma$ be any simplicial complex.

An $(r,N)$-random sampling $s$ from $\tilde\Sigma$ to $(\tilde Z_{*}^{d,r})^N$, equipped with the product measure of $P_K$, is given by

\[s := s_{x_{1,K}, \ldots, x_{N, K}}:  K \mapsto (B_r (x_{1,K}), \ldots, B_r (x_{N,K} )) \]

for some vertices $x_{1,K}, \ldots, x_{N, K} \in V(K)$.
 
\end{definition}
 
Note that $x_{1,K}, \ldots, x_{N,K}$ are not necessarily distinct.
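A sketch of such a sampling under the uniform choice of root vertices (the helper \texttt{ball}, returning the rooted $r$-ball as in section \ref{sec:convergence}, is hypothetical):

\begin{verbatim}
# Sketch (hypothetical helper `ball` returning the rooted r-ball):
# an (r, N)-random sampling with uniformly chosen root vertices.
import random

def random_sampling(vertices, simplices, r, N, ball, rng=random):
    """Return (B_r(x_1), ..., B_r(x_N)) for N uniformly chosen vertices,
    drawn with replacement, so repetitions are possible."""
    vs = list(vertices)
    return tuple(ball(simplices, rng.choice(vs), r) for _ in range(N))
\end{verbatim}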
 
 
\begin{definition}\label{def:samplingMeasure}\index{measure!sampling ---}
 
Let $K_0$, where $K\in\tilde \Sigma$, be endowed with the normalized counting measure,
 
i.e. any subset $A\subset K_0$ has measure $\# A / \# K_0$.
 
Let $Z_{*}^{d,r}$ be endowed with the measure $P_K$ induced by the map
 
$$ \left(K_0, A \mapsto \frac{\# A}{\# K_0} \right)
 
\rightarrow  \left( Z_{*}^{d,r}, P_K\right), %\Span\left\{\{\alpha\} \mapsto p_\alpha^r(K)\right\} \right),
 
\qquad
 
x \mapsto B_r(x).
 
$$
 
$P_K$ is called the sampling measure.
 
\end{definition}
 
 
Note that $P_K$ can be calculated easily for any $A\subset Z_{*}^{d,r}$ by
 
$P_K(A) = \sum_{\alpha \in A} p_\alpha^r(K)$.
 
In particular, it is a probability measure\index{probability space} again since $P_K(Z_{*}^{d,r})=1$.
 
 
\begin{definition}[testable in constant time]\index{testable in constant time}
 
Let $\varphi$ be a simplicial parameter that is defined on $S\subset\tilde\Sigma$.
 
A simplicial parameter $\varphi$ on $S$ is testable in constant time

iff for every $d$ and every $\varepsilon >0$ there exist $r$, $N$ and a tester $\tau := \tau^{r,N}$,

\[\tau: (Z_{*}^{d,r})^N \to \R,\]

such that

$$P_K\left( \left|\tau - \varphi(K)\right| > \varepsilon \right) < \varepsilon
 
\qquad \forall K\in S \cap \tilde\Sigma^d .
 
$$
 
\end{definition}
 
 
In words: the probability that, for a random sampling $s$, the tester $t:=\tau \circ s$ returns a guess whose deviation is greater than $\varepsilon$ is smaller than $\varepsilon$.
 
 
\section{Some Probability Theory}%===============================================================================
 
 
As we move into probability theory, we need a more advanced tool.

It is found in Chebyshev's inequality, proven in \cite[8.3.1 on p.~261]{Dud04}.
 
 
\begin{theorem}[Chebyshev's inequality]\label{thm:Chebyshev}\index{Chebyshev's inequality}
 
For any real random variable $X$ and $\varepsilon > 0$
 
\[ \probability(|X| \geq \varepsilon ) \leq \frac{\expectVal X^2}{\varepsilon^2}.\]
 
\end{theorem}
 
 
Let $\set{n}:= \{0,\ldots, n-1\}$ for all $n\in \N$.
 
An $M$-dimensional random vector\index{random!vector} is a map $X:\Omega\to \R^{\set{M}}$ on a probability space $\Omega$ such that the projections $X_m:\Omega \to \R$ are random variables, i.e. measurable, for all $m\in\set{M}$.
 
$\R^{\set{M}}$ is endowed with the Euclidean norm $|.|_2$
 
and the standard scalar product $\left\langle.,.\right\rangle$.
 
 
\begin{corollary}\label{cor:Chebyshev}
 
Let $X^1,\ldots,X^N$  be independent $M$-dimensional random vectors
 
such that they are bounded by $|X^n|_2 \leq 1$
 
and their expected values $\expectVal X^n := (\expectVal X^n_m)_{m\in\set{M}}$
 
\index{expected value!of a random vector} are equal,
 
i.e. $\mu := \expectVal X^1 = \ldots = \expectVal X^N$.
 
Then
 
\[ \probability( |X -  N \mu|_2 \geq \varepsilon N) \leq \frac{4}{\varepsilon^2 N} \]
 
where $X := \sum_{n=1}^N X^n$.
 
\end{corollary}
 
 
\begin{proof}
 
Let $Y^n := X^n - \mu = X^n - \expectVal X^n $ for all $n=1,\ldots, N$ and
 
$Y := X - N \mu = \sum_{n=1}^N Y^n$.
 
Note that $\expectVal Y^n = \vec 0$ for all $n = 1, \ldots N$. Expand
 
$$ \expectVal |Y|^2 = \expectVal \left\langle \sum_{n=1}^N Y^n, \sum_{n=1}^N Y^n\right\rangle
 
=  \sum_{n=1}^N \expectVal \left\langle Y^n, Y^n \right\rangle
 
+ \sum_{ \substack{ n,\nu = 1 \\n\neq \nu}}^N \expectVal \left\langle Y^n, Y^\nu \right\rangle.  $$
 
Observe that $Y^n_m$ and $Y^{\nu}_m$ are independent for $n\neq \nu$ with $n,\nu=1,\ldots, N$ and $m\in\set{M}$.
 
Hence, using the familiar multiplicative property
 
$\expectVal (\tilde X \tilde Y) = \expectVal \tilde X \expectVal \tilde Y$
 
of independent random variables $\tilde X$ and $\tilde Y$,
 
$\expectVal \left\langle Y^n, Y^\nu \right\rangle
 
= \langle\expectVal Y^n, \expectVal Y^\nu \rangle = \langle \vec 0, \vec 0\rangle $.
 
Thus
 
$$ \expectVal |Y|_2^2 = \sum_{n=1}^N \expectVal \left\langle Y^n, Y^n \right\rangle
 
= \sum_{n=1}^N \expectVal | X^n - \mu|_2^2. $$
 
Since $|X^n|_2\leq 1$ and $|\mu|_2 \leq 1$ we obtain the bound $\expectVal | X^n - \mu|_2^2 \leq 4 $.

Hence $\expectVal |Y|_2^2 \leq 4N $. Applying Chebyshev's inequality we obtain

$$ \probability(|Y|_2 \geq \varepsilon N) \leq \frac{\expectVal |Y|_2^2}{\varepsilon^2 N^2}

\leq \frac{4 }{\varepsilon^2 N}$$
 
for all $\varepsilon>0$. But this is exactly the corollary.
 
\end{proof}
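As a quick numerical illustration of the corollary (a sketch, not part of the argument), one may compare the empirical tail probability with the bound $4/(\varepsilon^2 N)$ for a simple choice of i.i.d.\ two-dimensional random vectors of norm at most $1$:

\begin{verbatim}
# Sketch: each X^n is (0,0) or (c,c) with c = 1/sqrt(2), each with
# probability 1/2, so |X^n|_2 <= 1 and mu = (c/2, c/2).
import random

def empirical_tail(N, eps, trials=20000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < 0.5 for _ in range(N))  # nonzero summands
        # for this choice of vectors, |X - N*mu|_2 equals |s - N/2|
        hits += abs(s - N / 2) >= eps * N
    return hits / trials, 4 / (eps ** 2 * N)

# empirical_tail(200, 0.3) returns a frequency far below the bound 4/(0.09*200).
\end{verbatim}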
 
 
%\subsection{Euler-Characteristik}
 
 
\begin{proposition}
 
Fix a vertex degree bound $d$.
 
The simplex densities $\rho_i$ are testable simplicial parameters on $\tilde\Sigma^d$ for all $i=0,\ldots,d$.
 
The tester has to consider only adjacent balls.
 
\end{proposition}
 
 
\begin{proof}
 
Fix $i\in \N$.
 
Instead of testing $\rho_i$ it is sufficient to test $\tilde \rho_i := (i+1) \rho_i$.

$\tilde\rho_i$ tells us in how many $i$-simplices a vertex is contained on average.
 
Given $\varepsilon>0$.
 
Let $X^n$ for $n=1,\ldots, N$ be the random variables

\[ X^n:  Z_{*}^{d,1} \to \R, \quad \alpha \mapsto \# \alpha_i\]
 
where $\alpha_i$ denotes the set of $i$-simplices in $\alpha$ that contain the root of $\alpha$.
 
Let $s:\Sigma^d\to (Z_{*}^{d,1})^N$ be a random sampling.
 
Let the random variable
 
$$X^{(N)} := X^1 + \ldots + X^N:(Z_{*}^{d,1})^N\to \R, \quad
 
(\alpha^1,\ldots, \alpha^N) \mapsto \#\alpha_i^1 + \ldots + \#\alpha_i^N
 
$$
 
be the total number of $i$-simplices of $\alpha^1, \ldots, \alpha^N$ that contain the respective roots.
 
In other words, $X^{(N)}/N$ makes a guess of $\tilde\rho_i$.

$X^n$ is bounded by $C:=\binom{d}{i}$ for all $n=1,\ldots, N$

since every $i$-simplex in $\alpha \in Z_*^{d,1}$ containing the root is determined by its $i$ vertices that are not the root.
 
Hence we can apply Chebyshev's inequality \ref{cor:Chebyshev}, i.e.
 
\begin{IEEEeqnarray}{rCl}
 
P_K \left( \left|\frac{X^{(N)}}{N} - \tilde\rho_i(K) \right| \geq \varepsilon \right)
 
&=& P_K \left( \left|\frac{X^{(N)}}{C} - N \frac{\tilde\rho_i(K)}{C} \right|
 
\geq \frac{\varepsilon}{C} N\right)\nonumber \\
 
&\leq& \frac{4 C^2}{\varepsilon^2 N} \label{eq:Euler}
 
\end{IEEEeqnarray}
 
for any complex $K$.
 
Now we choose $N$ sufficiently large such that this probability is smaller than $\varepsilon$.
 
Thus $\tilde\tau := X^{(N)}/N$ tests $\tilde\rho_i$ in constant time.

Hence $\tau := \tilde\tau / (i+1)$ tests $\rho_i$ in constant time.
 
\end{proof}
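A sketch of this tester under the hypothetical representation used before (the helper \texttt{ball} returns the rooted adjacent ball as in section \ref{sec:convergence}):

\begin{verbatim}
# Sketch of the density tester: sample N adjacent balls, count the
# i-simplices containing the sampled root, and divide by N*(i+1).
import random

def test_simplex_density(vertices, simplices, i, N, ball, seed=0):
    """Estimate rho_i(K) = #K_i / #K_0."""
    rng = random.Random(seed)
    vs = list(vertices)
    total = 0
    for _ in range(N):
        _, ball_simplices, root = ball(simplices, rng.choice(vs), 1)
        total += sum(1 for s in ball_simplices
                     if len(s) == i + 1 and root in s)
    return total / (N * (i + 1))
\end{verbatim}

The averaged count estimates $\tilde\rho_i$; the division by $i+1$ recovers $\rho_i$ as in the proof.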
 
 
\begin{definition}\label{def:Euler}\index{Euler characteristic}
 
The Euler characteristic $\chi(K)$ of a simplicial complex $K\in\Sigma$ is given by

\[ \chi(K) := \# K_0 - \# K_1 +  \# K_2 - \# K_3 \pm \ldots \]
 
\end{definition}
 
 
\begin{corollary}
 
The Euler characteristic on $\Sigma^d$ is testable in constant time,
 
i.e. the simplicial parameter
 
$\varphi_\chi (K) := \chi(K)/(\# K_0)$ \index{Euler characteristic!as a simplicial parameter}
 
is testable.
 
The tester has to consider only adjacent balls.
 
\end{corollary}
 
 
\begin{proof}
 
Given a vertex degree bound $d\in\N$ and $\varepsilon>0$.
 
We expand
 
$$ \frac{\chi(K)}{\# K_0}
 
= \sum_{k=0}^d (-1)^k \rho_k(K)
 
= 1 + \sum_{k=1}^d (-1)^k \rho_k(K). $$
 
From the previous proposition we know that for all $k=1,\ldots, d$ there is a tester $\tau_k = \tau^{1,N}_k$ on $(Z_{*}^{d,1})^{N(k)}$ such that
 
$P_K\left( \left| \tau_k - \rho_k \right| > \varepsilon/d\right) < \varepsilon/d $ for all $K\in\Sigma^d$.
 
Hence the tester $\tau := 1 - \tau_1 + \tau_2 \mp \ldots + (-1)^{d} \tau_d$ on $(Z_{*}^{d,1})^{N}$ where $N := \max_k N(k)$ does the job,
 
$$P_K\left( \left| \tau - \frac{\chi(K)}{\# V(K)} \right| > \varepsilon\right)
 
\leq \sum_{k=1}^d P_K\left( \left| \tau_k - \rho_k \right| > \frac{\varepsilon}{d}\right)
 
< d \cdot \frac{\varepsilon}{d} = \varepsilon.
 
$$
 
for all $K \in \Sigma^d$.
 
Thus $\varphi_\chi$ is testable in constant time by definition.
 
\end{proof}
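A sketch of the resulting tester under the same hypothetical representation; here $d$ is the vertex degree bound, so the simplices of $K$ have dimension at most $d$:

\begin{verbatim}
# Sketch: estimate chi(K)/#V(K) = 1 - rho_1 + rho_2 - ... + (-1)^d rho_d
# from N sampled adjacent balls (hypothetical helper `ball` as before).
import random

def test_euler(vertices, simplices, d, N, ball, seed=0):
    rng = random.Random(seed)
    vs = list(vertices)
    counts = [0] * (d + 1)         # counts[k]: k-simplices containing a root
    for _ in range(N):
        _, ball_simplices, root = ball(simplices, rng.choice(vs), 1)
        for s in ball_simplices:
            k = len(s) - 1
            if 1 <= k <= d and root in s:
                counts[k] += 1
    return 1.0 + sum((-1) ** k * counts[k] / (N * (k + 1))
                     for k in range(1, d + 1))
\end{verbatim}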
 
 
Recall the definition of $r_n$ from \ref{eq:list}. We define simplicial vectors
 
$$ p^M: \tilde\Sigma^d \to [0,1]^{\set{M}} = \left([0,1]^{\set{M}}, |.|_{\infty}\right), \quad
 
K \mapsto (p_m(K))_{m\in \set{M}}
 
$$
 
for any $M\in\N$, where $|.|_{\infty}$ denotes the infinity norm, i.e.\ in this case the maximum norm.
 
 
\begin{proposition}\label{prop:main}
 
The simplicial vector $p^M $ as defined above is testable in constant time \wrt the infinity norm for all $M\in \N$,
 
i.e. for all $M$ and $\varepsilon > 0$
 
there are natural numbers $r$ and $N$ and a tester $\tau: (Z^{d,r})^N \to \R^{\set{M}}$ such that
 
$$ P_K ( |\tau - p^M  |_\infty \geq \varepsilon ) \leq \varepsilon
 
\qquad \forall K \in \tilde \Sigma^d
 
$$
 
where $r := r_{M-1}$ is the radius of $p_{M-1}$ from \ref{eq:list}.
 
\end{proposition}
 
 
\begin{proof}
 
Of course, it is obvious how our tester should be constructed:
 
Let $X^n: Z^{d,r} \to \R^{\set{M}}$ for all $n=1,\ldots,N$ be independent random vectors
 
$$X^n(\alpha) = \begin{cases}
 
e_m & \text{if there is a } m\in\set{M}\text{ such that } B_{r_m} (\Root \alpha) \simeq \alpha_m \\
 
0 & \text{otherwise}
 
\end{cases}
 
$$
 
where $e_m$ is the $m$-th canonical unit vector.
 
$X^n$ is bounded by $r$ \wrt the Euclidean norm
 
since there are exactly $r$ restrictions of $\alpha$ to balls of radius $0, \ldots, r-1$.
 
Let $X^{(N)} := \sum_{n=1}^{N} X^n$ and $\mu := \expectVal X^1 = \ldots = \expectVal X^N$.
 
 
Hence we obtain for $X^{(N)}$
 
\begin{IEEEeqnarray}{rCl}
 
P_K \left( \left|\frac{X^{(N)}}{N} - \mu  \right|_\infty \geq \varepsilon \right)
 
&\leq& P_K \left( \left|\frac{X^{(N)}}{r} - \frac{\mu}{r} N  \right|_2
 
\geq \frac{\varepsilon}{r} N \right) \nonumber  \\
 
&\stackrel{\ref{cor:Chebyshev}}{\leq}& \frac{4r^2}{\varepsilon^2 N} \label{eq:main}
 
\end{IEEEeqnarray}
 
for all $K\in \tilde\Sigma^d$. Choosing $N$ sufficiently large we conclude that $\tau := X^{(N)}/N$ is the tester we sought.
 
\end{proof}
 
 
\begin{remark}[realisation of testers]
 
Concerning the computation of testers one may be interested in a better bound than the one given by Chebyshev's inequality.
 
Such a bound follows from a Chernoff bound\index{Chernoff bound}
 
derived applying methods from \cite{Lev} to random vectors.
 
Moreover, one can prove the main theorem of the following section
 
using a weaker version of proposition \ref{prop:main},
 
i.e. one can replace $p^M$ by $p^r := (p_m(K))_{r_m = r}$.
 
Thereby we eliminate dependence on $r$.
 
Concretely one can achieve the bounds
 
\begin{IEEEeqnarray*}{rCl/l?L}
 
P_K \left( |\tau - \tilde\rho_i(K)| \geq \varepsilon \right)
 
&\leq& 2 e^{-\varepsilon^2 C^{-2} N}
 
& \forall K\in \tilde\Sigma
 
& \text{instead of \ref{eq:Euler}} \\
 
P_K \left( |\tau - \mu N  |_\infty \geq \varepsilon N \right)
 
&\leq& 2 e^{-\varepsilon^2 N}
 
&\forall K\in \tilde\Sigma & \text{instead of \ref{eq:main}}.
 
\end{IEEEeqnarray*}
 
This encourages the study of the generalized problem

$P_K( |\tau - \varphi(K)| \geq \varepsilon_1) \leq \varepsilon_2$

since $N$ depends on $\varepsilon_2$ only in terms of $-\log \varepsilon_2$.
 
%replace corollary \ref{cor:Chebyshev} by corolarry \ref{cor:Chernoff} proven in the appendix.
 
\end{remark}
 
 
 
\section{Main Theorem}%===========================================================================================
 
 
Our main theorem generalizes section 6 of \cite{Elek10}.
 
Let $\tilde\Sigma^d$ be one of the classes $\Sigma^d, \Sigma^d_O, \Sigma^d_{|4}$ and $\Sigma^d_{|4,O}$.
 
$d$ will be fixed for the rest of this section.
 
Let $\tilde Z_*^d$ be the class of patterns corresponding to $\tilde \Sigma^d$.
 
 
\begin{theorem}[Main theorem]\label{thm:main}
 
Every convergent simplicial parameter $\varphi$ on $S \subset\tilde\Sigma^d$
 
is testable in constant time.
 
\end{theorem}
 
 
Note that one could infer testability of the Euler characteristic
 
from convergence of the simplex densities $\rho_i$ using the main theorem.
 
But one would not gain any properties of the tester.
 
We will prove the main theorem in three steps. We recall the auxiliary function

$p^n: \tilde\Sigma^d \to [0,1]^{\set{n}} = \left([0,1]^{\set{n}}, |.|_{\infty}\right)$ from the previous section,

which maps  $K$ to $ \{p_m(K)\}_{m\in \set{n}} $.
 
 
 
\begin{lemma}\label{lem:mainThm1}
 
Let $\varphi$ be a convergent simplicial parameter on $S$.
 
For all $\varepsilon>0$ there exists a $\delta>0$ and $n>0$ such that
 
$$\left| p^n(K) - p^n(L)\right|_\infty < \delta 
 
\;\:\Rightarrow\;\:  |\varphi(K) - \varphi(L)| < \varepsilon.
 
$$
 
for all $K, L \in S$.
 
\end{lemma}
 
 
\begin{proof}
 
Assume the opposite:
 
$$
 
\exists \varepsilon > 0\, \forall \delta > 0\, \forall n\, \exists K,L :
 
| p^n(K) - p^n(L)|_\infty < \delta
 
\wedge |\varphi(K) - \varphi(L)| \geq \varepsilon.$$
 
For $(\delta, n)=(\frac{1}{1}, 1),(\frac{1}{2}, 2),\ldots$ let $K^n$ and $L^n$ be simplicial complexes whose existence is guaranteed by the preceding statement. Moreover, we define the map
 
$$ p: S \to [0,1]^\N, \quad
 
K \mapsto \left\{p_m(K) \right\}_{m=0}^\infty .$$
 
 
$[0,1]^\N$ equipped with the product topology is sequentially compact

since $[0,1]$ is sequentially compact and $\N$ is countable \cite[I.7.4 thm.~6]{Sch75}.
 
If $\{ p (K^n)\}_{n\in\N}$ ranges over finitely many elements,

choose one element that has infinitely many preimages $K^{n(1)}, K^{n(2)}, \ldots$

such that $n(1) < n(2) < \ldots$ is satisfied.

Otherwise choose preimages $K^{n(1)}, K^{n(2)}, \ldots$ of a convergent subsequence of $\{p (K^n)\}_n$.
 
In both cases $K^{n(1)}$, $K^{n(2)}, \ldots$ converges

due to the convergence of $ p (K^{n(1)})$, $ p (K^{n(2)}), \ldots$ and the definition of the product topology.

Thus $K^{n(1)}, L^{n(1)}, K^{n(2)}, L^{n(2)}, \ldots$ converges as well, since $|p^{n(k)}(K^{n(k)}) - p^{n(k)}(L^{n(k)})|_\infty < \frac{1}{n(k)} \to 0$; but $\varphi(K^{n(1)})$, $\varphi(L^{n(1)})$, $\varphi(K^{n(2)})$, $\varphi(L^{n(2)}), \ldots$ does not converge, since consecutive terms differ by at least $\varepsilon$. Contradiction.
 
\end{proof}
 
 
\begin{lemma}\label{lem:mainThm2}
 
For all $ \delta>0$ and $n\in\N$ there exist $ L^0, L^1, \ldots, L^{m-1} \in S $
 
such that for all $K\in S$ one may find some $l\in \set{m}$ such that
 
\[ | p^n(K) - p^n(L^l) |_\infty < \delta.\]
 
\end{lemma}
 
 
\begin{proof}
 
The closure of the image of $p^n$ is compact since $|p^n|_\infty$ is bounded by 1.
 
Hence we may choose a finite set $x^1,\ldots, x^m\in [0,1]^n$
 
such that the image of $p^n$ is covered by open $\delta/2$-balls around $x^1,\ldots, x^m$ \wrt $|.|_{\infty}$; we may assume that each of these balls contains a point of the image, since otherwise we simply discard it.

Now we choose $L^l$ such that $p^n(L^l)\in B_{\delta/2}(x^l)$ for all $l\in\set{m}$.

For every $K$ the value $p^n(K)$ is contained in $B_{\delta/2}(x^l)$ for at least one $l\in \set{m}$,

thus $| p^n(K) - p^n(L^l)|_\infty < \delta$.
 
\end{proof}
 
 
\begin{proof}[Proof of \ref{thm:main}]
 
Fix $\varepsilon>0$. Choose $\delta$ and $n$ according to lemma \ref{lem:mainThm1}.
 
Using lemma \ref{lem:mainThm2} we choose $ L^0, L^1, \ldots, L^{m-1} \in S $ such that for all $K\in S$ one may find some $l\in \set{m}$ such that
 
\[ | p^n(K) - p^n(L^l) |_\infty < \frac{\delta}{4}.\]
 
Moreover, proposition \ref{prop:main} provides us with a tester $\tilde\tau$ such that
 
$$ P_K \left( |\tilde\tau - p^n  |_\infty \geq \min\left\{\frac{\delta}{4}, \varepsilon\right\} \right)
 
\leq  \min\left\{\frac{\delta}{4}, \varepsilon\right\}
 
\qquad \forall K \in \tilde \Sigma^d.
 
$$
 
Let $\tau$ be the tester that takes the result of $\tilde\tau$ and returns the value $\varphi(L^l)$,

where $L^l$ minimizes the distance $|\tilde\tau - p^n(L^l)|_\infty$ over $l\in\set{m}$.
 
 
Finally we obtain the following estimate \wrt the infinity norm:
 
\begin{center}
 
\begin{tikzpicture}[vertices/.style={draw, fill=black, circle, inner sep=0.5pt}, scale=1.2]
 
\node[vertices, label=above right:{$\tau$}] (origin) at (0,0) {};
 
\node[vertices, label=below left:{$p^n(L^k)$}] at (1.5,1.8) {};
 
\node[vertices, label=left:{$p^n(K)$}] at (.8,.9) {};
 
\node[vertices, label=above:{$p^n(L^l)$}] (Ll) at (1.4,-1.5) {};
 
%\node[vertices, label={$\infty$}] (infty) at (0,0) {};
 
\draw (-2,-2) -- (2,-2) -- (2,2) -- (-2,2) -- cycle;
 
\draw (-.2,-.1) -- (1.8,-.1) -- (1.8,1.9) -- (-.2,1.9) -- cycle;
 
\draw (Ll) -- (.8,-1.5);
 
 
\draw [decorate,decoration={brace,  amplitude=5, mirror}] (origin) --
 
node [black,midway, above= 2] {$\delta/2$} (-2,0);
 
\draw [decorate,decoration={brace,  amplitude=5}] (.8,.9) --
 
node [black,midway, right= 7] {$\delta/2$} (.8,-.1);
 
\draw [decorate,decoration={brace,  amplitude=5, mirror}] (.8,.9) --
 
node [black,midway, left= 7] {$<\delta$} (.8,-1.5);
 
\end{tikzpicture}
 
\end{center}
 
Given any $K\in S$. With probability more than $1 -\varepsilon$ the tester $\tilde\tau$ is in the closed $\delta/4$-neighborhood of the true result $p^n(K)$.
 
The value $p^n(L^l)$, where $L^l$ minimizes the distance $|\tilde\tau - p^n(L^l)|_\infty$, is in the open $\delta/2$-neighborhood of $\tilde\tau$,

since $p^n(L^k)$, where $L^k$ minimizes the distance $|p^n(K) - p^n(L^k)|_\infty$, is in the open $\delta/4$-neighborhood of $p^n(K)$.
 
From
 
$$|p^n(L^l) - p^n(K) |_\infty
 
\leq \left|p^n(L^l) - \tilde\tau \right|_\infty + \left| \tilde\tau - p^n(K) \right|_\infty
 
\leq  \frac{\delta}{2} + \frac{\delta}{4} < \delta
 
$$
 
it follows by lemma \ref{lem:mainThm1} that $|\tau - \varphi(K)|<\varepsilon$, since $\tau = \varphi(L^l)$.
 
Hence with probability more than $1 -\varepsilon$ the deviation of the tester $\tau$ is less than $\varepsilon$.
 
\end{proof}
 
 
%Being convergent (proposition \ref{lem:convNumb_iSimpl} and lemma \ref{prop:convEuler}),
 
%the following simplicial parameters converge.
 
%
 
%\begin{corollary}\label{cor:testSimplices}
 
% The simplicial parameters $\rho_i$ on $\tilde\Sigma^d$ converge.
 
%\end{corollary}
 
%
 
%\begin{corollary}\label{cor:testEuler}
 
% The Euler characteristic is testable in constant time,
 
% i.e. the simplicial parameter $\varphi_\chi$ on $\Sigma^d$ is testable in constant time.
 
%\end{corollary}
 
