Failure of the trilinear operator space Grothendieck theorem
- Jop Briët, Algorithms and Complexity group, Centrum Wiskunde & Informatica
- Carlos Palazuelos, Análisis Matemático y Matemática Aplicada, Universidad Complutense de Madrid
Editorial introduction
Failure of the trilinear operator space Grothendieck theorem, Discrete Analysis 2019:8, 16 pp.
Let \beta:\ell_\infty^n\times\ell_\infty^n\to\mathbb C be a bilinear form. We define its operator norm by the formula
\|\beta\|=\sup\{|\beta(x,y)|:\|x\|_\infty\leq 1,\ \|y\|_\infty\leq 1\}.
Let (A_{ij}) be the matrix of \beta, so that
\beta(x,y)=\sum_{ij}A_{ij}x_iy_j.
This second formula can be generalized in interesting ways. For instance, we can replace the coefficients x_i and y_j of x and y by vectors that live in a Hilbert space H, and replace the formula by
\tilde\beta(x,y)=\sum_{ij}A_{ij}\langle x_i,y_j\rangle.
It is natural to define \|x\|_\infty to be \max_i\|x_i\|_2 when x is a vector-valued sequence like this, which allows us to define a norm for \tilde\beta using the same formula (appropriately interpreted) as for \beta. An important inequality of Grothendieck states that there is an absolute constant K such that \|\beta\|\leq\|\tilde\beta\|\leq K\|\beta\|. Grothendieck also showed (using the Hahn-Banach theorem) that a consequence of this inequality is that there always exists a matrix (\tilde A_{ij}) of operator norm at most K\|\beta\| and unit vectors \lambda,\mu\in\mathbb C^n such that A_{ij}=\lambda_i\tilde A_{ij}\mu_j for every i,j. Writing \lambda,\mu also for the multipliers that multiply the ith coordinate by \lambda_i and \mu_i, respectively, and \tilde\alpha for the linear map with matrix (\tilde A_{ij}), we obtain from this the formula
\beta(x,y)=\langle\tilde\alpha\lambda x,\mu y\rangle.
Note that \lambda and \mu are linear maps from \ell_\infty^n to \ell_2^n and that they have norm 1. Setting \Psi_1=\tilde\alpha\lambda and \Psi_2=\mu, we therefore have a factorization
\beta(x,y)=\langle\Psi_1x,\Psi_2y\rangle
where \Psi_1 and \Psi_2 are linear maps from \ell_\infty^n to \ell_2^n with \|\Psi_1\|\|\Psi_2\|\leq K\|\beta\|.
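Although not needed for what follows, it may help to see these objects concretely. The sketch below is purely illustrative and not taken from the paper: it works over the reals for simplicity, takes a random coefficient matrix A, and computes crude sampling lower bounds for \|\beta\| and \|\tilde\beta\|; all names and parameters are chosen only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, trials = 5, 3, 10000
A = rng.standard_normal((n, n))  # coefficient matrix of beta

def beta(x, y):
    # beta(x, y) = sum_{ij} A_ij x_i y_j
    return x @ A @ y

def beta_tilde(X, Y):
    # X, Y are n x d arrays whose rows are the Hilbert-space vectors x_i, y_j;
    # beta_tilde(X, Y) = sum_{ij} A_ij <x_i, y_j>  (real inner product here)
    return np.einsum('ij,ik,jk->', A, X, Y)

# Lower-bound ||beta|| by sampling sign vectors, which are extreme points of
# the unit ball of ell_infty^n over the reals.
norm_beta = max(abs(beta(rng.choice([-1.0, 1.0], n),
                         rng.choice([-1.0, 1.0], n))) for _ in range(trials))

# Lower-bound ||beta_tilde|| by sampling matrices whose rows are unit vectors.
def random_unit_rows():
    M = rng.standard_normal((n, d))
    return M / np.linalg.norm(M, axis=1, keepdims=True)

norm_beta_tilde = max(abs(beta_tilde(random_unit_rows(), random_unit_rows()))
                      for _ in range(trials))

# Scalars embed as one-dimensional vectors, so the true norms satisfy
# ||beta|| <= ||beta_tilde||; Grothendieck's theorem bounds the reverse ratio.
print(norm_beta, norm_beta_tilde)
```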
The space \ell_\infty^n with pointwise multiplication is a prototypical example of a commutative C^*-algebra, and the result above generalizes straightforwardly to all commutative C^*-algebras. Grothendieck conjectured that it could also be generalized to noncommutative C^*-algebras, a conjecture that was eventually proved many years later, by Gilles Pisier under certain extra assumptions and then by Uffe Haagerup in full generality.
A further direction of generalization concerns a concept known as an operator space, which, very roughly, is what you get if you take account not just of standard operator norms but also of a sequence of related norms on matrices. Observe that if \mathcal A and \mathcal B are two algebras and \beta:\mathcal A\times\mathcal B\to\mathbb C is a bilinear form, then for each positive integer d we can build out of \beta a bilinear map \beta_d:M_d(\mathcal A)\times M_d(\mathcal B)\to M_d(\mathbb C) in a natural way, where M_d(\mathcal A) and M_d(\mathcal B) are the spaces of d\times d matrices with coefficients in \mathcal A and \mathcal B, respectively. Indeed, if A\in M_d(\mathcal A) and B\in M_d(\mathcal B), then we define
\beta_d(A,B)_{ij}=\sum_k\beta(A_{ik},B_{kj}).
This can be thought of as standard matrix multiplication, but using the bilinear form \beta in order to “multiply” individual coefficients together.
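To make this concrete, here is a small illustrative sketch (again not from the paper) in which \mathcal A=\mathcal B=\ell_\infty^n, the bilinear form \beta is given by a coefficient matrix M, and an element of M_d(\mathcal A) is stored as a d\times d\times n array; all names are chosen for the illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 3
M = rng.standard_normal((n, n))          # beta(a, b) = sum_{kl} M_kl a_k b_l

def beta(a, b):
    return a @ M @ b

def beta_d(A, B):
    # A, B have shape (d, d, n): d x d matrices whose entries are vectors in ell_infty^n.
    # beta_d(A, B)_{ij} = sum_k beta(A_{ik}, B_{kj}), i.e. matrix multiplication
    # with beta playing the role of the product of two entries.
    out = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            out[i, j] = sum(beta(A[i, k], B[k, j]) for k in range(d))
    return out

A = rng.standard_normal((d, d, n))
B = rng.standard_normal((d, d, n))
print(beta_d(A, B))
```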
One then says that the bilinear form \beta is completely bounded if the norms of the bilinear maps \beta_d form a bounded sequence, and the supremum of this sequence is called the completely bounded norm \|\beta\|_{\mathrm{cb}}.
The completely bounded norm has the drawback that it is not invariant under taking the transpose (in a sense that is not hard to make precise). However, there is a simple symmetrization procedure that gives rise to a symmetric version, denoted \|.\|_{\mathrm{sym}}.
While matrix multiplication is an obvious bilinear map on M_d(\mathbb C)\times M_d(\mathbb C), it is not the only natural one. Another is the tensor product, which takes M_d(\mathbb C)\times M_d(\mathbb C) to M_{d^2}(\mathbb C). We can describe it explicitly by identifying \{1,\dots,d\}^2 with \{1,\dots,d^2\} in a natural way, and then using the formula
(A\otimes B)_{(i_1,i_2),(j_1,j_2)}=A_{i_1j_1}B_{i_2j_2}.
As with matrix multiplication, this then has an easy algebra-valued generalization. With \mathcal A,\mathcal B,\beta,A,B as before, we can define
\tilde\beta_d(A,B)_{(i_1,i_2),(j_1,j_2)}=\beta(A_{i_1j_1},B_{i_2j_2}).
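In the same toy setting as before (\beta given by a coefficient matrix M on \ell_\infty^n, with matrix entries stored as d\times d\times n arrays), the construction of \tilde\beta_d can be sketched as follows; the reshape at the end carries out the identification of \{1,\dots,d\}^2 with \{1,\dots,d^2\}.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 4, 3
M = rng.standard_normal((n, n))                   # beta(a, b) = sum_{kl} M_kl a_k b_l

def beta_tilde_d(A, B):
    # A, B have shape (d, d, n).
    # beta_tilde_d(A, B)_{(i1,i2),(j1,j2)} = beta(A_{i1 j1}, B_{i2 j2}).
    out = np.einsum('ijp,pq,klq->ikjl', A, M, B)  # axes ordered as (i1, i2, j1, j2)
    return out.reshape(d * d, d * d)              # identify {1,...,d}^2 with {1,...,d^2}

A = rng.standard_normal((d, d, n))
B = rng.standard_normal((d, d, n))
print(beta_tilde_d(A, B).shape)                   # (d**2, d**2)
```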
This allows us to associate with the bilinear form \beta another sequence of norms, namely the operator norms of the \tilde\beta_d. If this sequence is bounded, then \beta is said to be jointly completely bounded, and the supremum is denoted by \|\beta\|_{\mathrm{jcb}}.
It turns out that an appropriate operator-space generalization of Grothendieck's theorem is the statement that \|\beta\|_{\mathrm{jcb}}\leq\|\beta\|_{\mathrm{sym}}\leq K\|\beta\|_{\mathrm{jcb}} for some absolute constant K, which can in fact be taken to be 2.
Another natural direction of generalization is from bilinear forms to trilinear forms. It is not obvious what the right statement should be – for example, with the first formulations above, it is not obvious what trilinear form should play the role of the inner product in a Hilbert space. However, interest in this possibility grew with the operator space version, since the definitions just given can be generalized fairly straightforwardly to trilinear forms, which suggested that perhaps a satisfactory trilinear Grothendieck statement had been found.
However, this paper dashes that hope, in the process answering a question of Gilles Pisier, by giving a counterexample. Interestingly, and very appropriately for a paper in Discrete Analysis, the proof makes use of ideas from additive combinatorics. In particular, it makes use of a noncommutative version of the generalized von Neumann inequality. This allows the authors to bound the jointly completely bounded norm of a trilinear form built out of a function f, defined on a finite Abelian group, in terms of the U^3-norm of f. This is coupled with an argument of Varopoulos that provides a significantly larger lower bound for the symmetrized completely bounded norm.
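For context, the U^3 norm appearing here is the Gowers uniformity norm of degree 3: for a function f defined on a finite Abelian group G it is given by
\|f\|_{U^3}^8=\mathbb E_{x,h_1,h_2,h_3\in G}\prod_{\omega\in\{0,1\}^3}\mathcal C^{|\omega|}f(x+\omega_1h_1+\omega_2h_2+\omega_3h_3),
where \mathcal C denotes complex conjugation and |\omega|=\omega_1+\omega_2+\omega_3. Generalized von Neumann inequalities bound multilinear averages of this kind in terms of uniformity norms, which is what makes the U^3 norm the natural quantity to control here.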