Failure of the trilinear operator space Grothendieck theorem

- Jop Briët, Algorithms and Complexity group, Centrum Wiskunde & Informatica
- Carlos Palazuelos, Análisis Matemático y Matemática Aplicada, Universidad Complutense de Madrid

*Discrete Analysis*, June 2019. https://doi.org/10.19086/da.8805.

### Editorial introduction

Failure of the trilinear operator space Grothendieck theorem, Discrete Analysis 2019:8, 16 pp.

Let $\beta:\ell_\infty^n\times\ell_\infty^n\to\mathbb{C}$ be a bilinear form. We define its operator norm by the formula

$$\|\beta\|=\sup\{|\beta(x,y)|:\|x\|_\infty=\|y\|_\infty=1\}.$$
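For real scalars the supremum is attained at sign vectors (the extreme points of the $\ell_\infty$ unit ball), so for small $n$ the operator norm can be found by brute force. A minimal numpy sketch, with an arbitrary illustrative coefficient matrix:

```python
import itertools
import numpy as np

# Coefficient matrix of an illustrative real bilinear form beta(x, y) = x^T A y.
A = np.array([[1.0, -2.0],
              [3.0, 0.5]])
n = A.shape[0]

# On the real l_infty unit ball, |beta| is maximised at sign vectors,
# so the operator norm is a maximum over 2^n * 2^n sign patterns.
signs = [np.array(s) for s in itertools.product([-1.0, 1.0], repeat=n)]
op_norm = max(abs(x @ A @ y) for x in signs for y in signs)
print(op_norm)  # 5.5 for this particular A
```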

Let $(A_{ij})$ be the matrix of $\beta$, so that

$$\beta(x,y)=\sum_{ij}A_{ij}x_iy_j.$$

This second formula can be generalized in interesting ways. For instance, we can replace the coefficients $x_i$ and $y_j$ of $x$ and $y$ by vectors that live in a Hilbert space $H$, and replace the formula by

$$\tilde\beta(x,y)=\sum_{ij}A_{ij}\langle x_i,y_j\rangle.$$
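Concretely, the vector-valued form replaces each product $x_iy_j$ by an inner product $\langle x_i,y_j\rangle$, i.e. a contraction of $(A_{ij})$ against the Gram matrix of the two families of vectors. A hedged numpy sketch with arbitrary illustrative real data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 4                      # n coordinates; vectors live in R^m

A = rng.standard_normal((n, n))  # coefficient matrix (A_ij) of beta
X = rng.standard_normal((n, m))  # row i is the vector x_i
Y = rng.standard_normal((n, m))  # row j is the vector y_j

# beta_tilde(x, y) = sum_ij A_ij <x_i, y_j>: contract A against the
# Gram matrix (X @ Y.T)_ij = <x_i, y_j>.
beta_tilde = np.einsum('ij,ip,jp->', A, X, Y)
```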

It is natural to define $\|x\|_\infty$ to be $\max_i\|x_i\|_2$ when $x$ is a vector-valued sequence like this, which allows us to define a norm for $\tilde\beta$ using the same formula (appropriately interpreted) as for $\beta$. An important inequality of Grothendieck states that there is an absolute constant $K$ such that $\|\beta\|\leq\|\tilde\beta\|\leq K\|\beta\|$. Grothendieck also showed (using the Hahn–Banach theorem) that a consequence of this inequality is that there always exist a matrix $(\tilde A_{ij})$ of operator norm at most $K\|\beta\|$ and unit vectors $\lambda,\mu\in\mathbb{C}^n$ such that $A_{ij}=\lambda_i\tilde A_{ij}\mu_j$ for every $i,j$. Writing $\lambda,\mu$ also for the multipliers that multiply the $i$th coordinate by $\lambda_i$ and $\mu_i$, respectively, and $\tilde\alpha$ for the linear map with matrix $(\tilde A_{ij})$, we obtain from this the formula

$$\beta(x,y)=\langle\tilde\alpha\lambda x,\mu y\rangle.$$

Note that $\lambda$ and $\mu$ are linear maps from $\ell_\infty^n$ to $\ell_2^n$ and that they have norm 1. Setting $\Psi_1=\tilde\alpha\lambda$ and $\Psi_2=\mu$, we therefore have a factorization

$$\beta(x,y)=\langle\Psi_1x,\Psi_2y\rangle,$$

where $\Psi_1$ and $\Psi_2$ are linear maps from $\ell_\infty^n$ to $\ell_2^n$ with $\|\Psi_1\|\|\Psi_2\|\leq K\|\beta\|$.
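Finding the multipliers is the hard (existence) part of the theorem, but once $\tilde A$, $\lambda$ and $\mu$ are in hand the factorization identity is a direct computation. A real-valued sanity check, with $\tilde A$, $\lambda$, $\mu$ chosen arbitrarily and $A$ defined by $A_{ij}=\lambda_i\tilde A_{ij}\mu_j$ (a transpose appears below because of the convention for which index $\tilde\alpha$ sums over):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A_tilde = rng.standard_normal((n, n))
lam = rng.standard_normal(n)
mu = rng.standard_normal(n)

# A_ij = lam_i * A_tilde_ij * mu_j
A = lam[:, None] * A_tilde * mu[None, :]

x = rng.standard_normal(n)
y = rng.standard_normal(n)

# beta(x, y) = sum_ij A_ij x_i y_j equals the factored form
# <A_tilde^T (lam * x), mu * y>  (real inner product).
lhs = x @ A @ y
rhs = (A_tilde.T @ (lam * x)) @ (mu * y)
```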

The space $\ell_\infty^n$ with pointwise multiplication is a prototypical example of a commutative $C^*$-algebra, and the result above generalizes straightforwardly to all commutative $C^*$-algebras. Grothendieck conjectured that it could also be generalized to noncommutative $C^*$-algebras, a conjecture that was eventually proved many years later, by Gilles Pisier under certain extra assumptions and by Uffe Haagerup in full generality.

A further direction of generalization concerns a concept known as an *operator space*, which, very roughly, is what you get if you take account not just of standard operator norms but also of a sequence of related norms on matrices. Observe that if $\mathcal{A}$ and $\mathcal{B}$ are two algebras and $\beta:\mathcal{A}\times\mathcal{B}\to\mathbb{C}$ is a bilinear form, then for each positive integer $d$ we can build out of $\beta$ a bilinear map $\beta_d:M_d(\mathcal{A})\times M_d(\mathcal{B})\to M_d(\mathbb{C})$ in a natural way, where $M_d(\mathcal{A})$ and $M_d(\mathcal{B})$ are the spaces of $d\times d$ matrices with coefficients in $\mathcal{A}$ and $\mathcal{B}$, respectively. Indeed, if $A\in M_d(\mathcal{A})$ and $B\in M_d(\mathcal{B})$, then we define

$$\beta_d(A,B)_{ij}=\sum_k\beta(A_{ik},B_{kj}).$$

This can be thought of as standard matrix multiplication, but using the bilinear form $\beta$ in order to “multiply” individual coefficients together.
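Taking $\mathcal{A}=\mathcal{B}=\ell_\infty^n$, so that each entry of the block matrices is an $n$-vector and $\beta(u,v)=u^{\mathsf T}Mv$ for some coefficient matrix $M$, the amplification $\beta_d$ is a single tensor contraction. A minimal numpy sketch (all data here is arbitrary illustrative input):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 3, 2

M = rng.standard_normal((n, n))     # coefficients of beta: beta(u, v) = u^T M v
A = rng.standard_normal((d, d, n))  # A[i, k] is an n-vector entry of M_d(l_infty^n)
B = rng.standard_normal((d, d, n))

# beta_d(A, B)_ij = sum_k beta(A_ik, B_kj): matrix multiplication in which
# beta replaces the product of individual entries.
beta_d = np.einsum('ikp,pq,kjq->ij', A, M, B)
```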

One then says that the bilinear form $\beta$ is *completely bounded* if the norms of the bilinear maps $\beta_d$ form a bounded sequence, and the supremum of this sequence is called the *completely bounded norm* $\|\beta\|_{\mathrm{cb}}$.

The completely bounded norm has the drawback that it is not invariant under taking the transpose (in a sense that is not hard to make precise). However, there is a simple symmetrization procedure that gives rise to a symmetric version, denoted $\|\cdot\|_{\mathrm{sym}}$.

While matrix multiplication is an obvious bilinear map on $M_d(\mathbb{C})\times M_d(\mathbb{C})$, it is not the only natural one. Another is the tensor product, which takes $M_d(\mathbb{C})\times M_d(\mathbb{C})$ to $M_{d^2}(\mathbb{C})$. We can describe it explicitly by identifying $\{1,\dots,d\}^2$ with $\{1,\dots,d^2\}$ in a natural way, and then using the formula

$$(A\otimes B)_{(i_1,i_2),(j_1,j_2)}=A_{i_1j_1}B_{i_2j_2}.$$
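Under the row-major identification $(i_1,i_2)\mapsto d\,i_1+i_2$ (0-indexed), this is exactly the Kronecker product, which numpy computes directly:

```python
import numpy as np

d = 2
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 5.0],
              [6.0, 7.0]])

# np.kron realises (A ⊗ B)_{(i1,i2),(j1,j2)} = A_{i1 j1} B_{i2 j2}
# under the identification (i1, i2) -> d*i1 + i2.
T = np.kron(A, B)

# Spot check one entry against the defining formula.
i1, i2, j1, j2 = 1, 0, 0, 1
assert T[d * i1 + i2, d * j1 + j2] == A[i1, j1] * B[i2, j2]
```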

As with matrix multiplication, this then has an easy algebra-valued generalization. With $\mathcal{A},\mathcal{B},\beta,A,B$ as before, we can define

$$\tilde\beta_d(A,B)_{(i_1,i_2),(j_1,j_2)}=\beta(A_{i_1j_1},B_{i_2j_2}).$$
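Again taking entries in $\ell_\infty^n$ and $\beta(u,v)=u^{\mathsf T}Mv$, the tensor amplification $\tilde\beta_d$ is a Kronecker-style contraction: every pair of entries contributes, rather than only the matched pairs of matrix multiplication. A hedged numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 3, 2

M = rng.standard_normal((n, n))     # beta(u, v) = u^T M v
A = rng.standard_normal((d, d, n))  # entries A[i1, j1] are n-vectors
B = rng.standard_normal((d, d, n))

# beta_tilde_d(A, B)_{(i1,i2),(j1,j2)} = beta(A_{i1 j1}, B_{i2 j2}),
# reshaped to a d^2 x d^2 matrix via (i1, i2) -> d*i1 + i2.
T = np.einsum('ajp,pq,bkq->abjk', A, M, B).reshape(d * d, d * d)
```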

This allows us to associate with the bilinear form $\beta$ another sequence of norms, namely the operator norms of the $\tilde\beta_d$. If this sequence is bounded, then $\beta$ is said to be *jointly completely bounded*, and the supremum is denoted by $\|\beta\|_{\mathrm{jcb}}$.

It turns out that an appropriate operator-space generalization of Grothendieck’s theorem is the statement that $\|\beta\|_{\mathrm{jcb}}\leq\|\beta\|_{\mathrm{sym}}\leq K\|\beta\|_{\mathrm{jcb}}$ for some absolute constant $K$, which can in fact be taken to be 2.

Another natural direction of generalization is from bilinear forms to trilinear forms. It is not obvious what the right statement should be – for example, with the first formulations above, it is not clear what trilinear form should play the role of the inner product in a Hilbert space. However, interest in this possibility grew with the operator space version, since the definitions just given can be generalized fairly straightforwardly to trilinear forms, which suggested that perhaps a satisfactory trilinear Grothendieck statement had been found.

However, this paper dashes that hope, in the process answering a question of Gilles Pisier, by giving a counterexample. Interestingly, and very appropriately for a paper in Discrete Analysis, the proof draws on ideas from additive combinatorics. In particular, it uses a noncommutative version of the generalized von Neumann inequality. This allows the authors to bound the jointly completely bounded norm of a trilinear form built out of a function $f$, defined on a finite Abelian group, in terms of the $U^3$-norm of $f$. This is coupled with an argument of Varopoulos that provides a significantly larger lower bound for the symmetrized completely bounded norm.