Exponential sums with reducible polynomials

Hooley proved that if $f\in \Bbb Z [X]$ is irreducible of degree $\ge 2$, then the fractions $\{ r/n\}$, $0<r<n$ with $f(r)\equiv 0\pmod n$, are uniformly distributed in $(0,1)$. In this paper we study such problems for reducible polynomials of degree $2$ and $3$ and for finite products of linear factors. In particular, we establish asymptotic formulas for exponential sums over these normalized roots.


Introduction
Let f(X) be a polynomial of degree at least 2 with integer coefficients, and let h be a nonzero integer. For x ≥ 1 we consider the exponential sums

S(f, x, h) = Σ_{n≤x} Σ_{0<r<n, f(r)≡0 (mod n)} e(hr/n),

with the standard notation e(t) = exp(2iπt); our interest is in fixed f and h while x tends to infinity. In the case h = 1 we will write simply S(f, x). Hooley [11, Theorem 1] proved that if f is irreducible, then S(f, x, h) = o(x) as x → ∞. By Weyl's criterion, this implies that the fractions r/n, where 0 < r < n and f(r) ≡ 0 (mod n), are uniformly distributed in ]0, 1[.

In all of the above results, the polynomial f is assumed to be irreducible; what can we say about the sums S(f, x, h) when f is reducible? In this paper we examine reducible polynomials of degree 2 and 3 (in which case f has at least one linear factor) and polynomials that factor completely into linear factors. The hope is to obtain not just an upper bound but an actual asymptotic formula for S(f, x, h), analogously to the situation described above where f is itself linear.
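Hooley's theorem is easy to illustrate numerically. The following brute-force sketch (the function names are ours, and the irreducible quadratic f(t) = t² + 1 is just a convenient example) computes the sum of e(hr/n) over the normalized roots and compares its size with the number of summands:

```python
import cmath

def normalized_roots(f, x):
    """All pairs (r, n) with 0 < r < n <= x and f(r) ≡ 0 (mod n)."""
    return [(r, n) for n in range(2, x + 1) for r in range(1, n) if f(r) % n == 0]

def S(f, x, h=1):
    """S(f, x, h) = sum over normalized roots of e(h r / n)."""
    return sum(cmath.exp(2j * cmath.pi * h * r / n) for r, n in normalized_roots(f, x))

# Irreducible case (Hooley): f(t) = t^2 + 1, so S(f, x) = o(x) and the
# normalized roots r/n equidistribute; |S| stays far below the root count.
f = lambda t: t * t + 1
print(len(normalized_roots(f, 300)), abs(S(f, 300)))
```

Even at this modest range, |S(f, x)| is an order of magnitude smaller than the number of roots, consistent with the o(x) bound.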
For example, Sitar and the second author [15] obtained a general nontrivial bound for S(f, x, h) when f is a reducible quadratic polynomial with discriminant D. Since the number of summands of S(f, x, h) has order x log x in this case, a consequence is that the fractions r/n, where 0 < r < n and f(r) ≡ 0 (mod n), are uniformly distributed in ]0, 1[ for reducible quadratic polynomials. However, this bound is still large enough to disguise a potential asymptotic main term of size x caused by the roots of the linear factors of f. In our first theorem, which is proved in Section 3, we provide such an asymptotic formula for exponential sums with reducible quadratics:

Theorem 1. Let a, b, c, d be fixed integers with ac ≠ 0, (a, b) = (c, d) = 1, and ad ≠ bc, and set f(n) = (an + b)(cn + d). Then for any nonzero integer h, there exists C(f, h) ∈ ℂ such that for any ε > 0, we have S(f, x, h) = xC(f, h) + O(x^θ) for an exponent θ < 1 made explicit in equation (3), where the implicit constant depends on f, h, and ε. When (h, ad − bc) = 1, the error term in equation (3) can be improved to O(x^{3/4+ε}).
The proof of this theorem provides an explicit but complicated formula for C(f, h) (see equation (13) below). In the particular case h = 1, we obtain a corresponding asymptotic formula S(f, x) = xC(f, 1) + o(x). (Note in particular that it is possible for C(f, 1) to equal 0, namely if neither a nor c is squarefree; in such a case, Theorem 1 is technically an upper bound rather than an asymptotic formula. The analogous pedantic comment applies to Theorem 4 below.)

Our second result handles the case f(n) = n(n² + 1). In Section 4 we will prove:

Theorem 2. For f(n) = n(n² + 1), we have an asymptotic formula for S(f, x) with a main term of the form xC(f).

It is also possible to generalize Theorem 2 to S(f, x, h) for a general nonzero integer h, as in the work of Hooley, though we do not do so here. It is likely possible to extend Theorem 2 to general products of two polynomials f₁f₂ with f₁ linear and f₂ an irreducible quadratic by adapting some ideas of [18] or [1]. However, such a generalization with a general f₂ in place of n² + 1 is not straightforward; the case where f₂ has positive discriminant seems more difficult.

The next result, which we prove in Sections 5 and 6.1, handles in detail a special product of three linear polynomials:

Theorem 3. For f(n) = n(n + 1)(2n + 1), we have an asymptotic formula of the same shape for S(f, x).

It was clear that this result could be generalized to arbitrary products of three linear polynomials. However, after reading an earlier version of this manuscript, the anonymous referee pointed out that the method can be extended to give an asymptotic formula for S(f, x, h) when f is the product of four linear polynomials, as well as a nontrivial upper bound for S(f, x, h) when f is a product of any number of linear polynomials. We prove the following result in Sections 6.2 and 7:

Theorem 4. Let h be a nonzero integer, let k ≥ 3 be an integer, and let f(n) = ∏_{i=1}^{k} (a_i n + b_i), where the coefficients a₁, b₁, ..., a_k, b_k are integers satisfying (a_i, b_i) = 1 for all 1 ≤ i ≤ k and a_i b_j ≠ a_j b_i for all 1 ≤ i < j ≤ k. Then for k = 3 and k = 4 we obtain an asymptotic formula S(f, x, h) = xC(f, h) + o(x), while for k ≥ 5 we obtain a nontrivial upper bound for S(f, x, h). In both cases, the implicit constants depend on f and h.
As the number of fractions r/n, where 0 < r < n and f(r) ≡ 0 (mod n), for all n ≤ x has order of magnitude x(log x)^{κ−1}, where κ is the number of distinct irreducible factors of f, these theorems all imply that the appropriate sequences of fractions are uniformly distributed in ]0, 1[. It is also worth remarking that it is only for convenience that we hypothesize that the linear factors in Theorems 1 and 4 are not proportional to one another. Indeed, more generally consider a polynomial f = g²h: the roots of f modulo squarefree integers n are unaffected by the repeated factor g², while if p² | n then the roots of g² (mod n) form arithmetic progressions with common difference n/p, and thus their contribution to the exponential sum S(f, x, h) vanishes completely.
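The prediction of Theorem 1 can also be examined numerically. In the hedged sketch below we pick the admissible coefficients a = 1, b = 1, c = 2, d = 1 (our own choice, satisfying ac ≠ 0, (a, b) = (c, d) = 1, ad ≠ bc) and watch the ratio S(f, x)/x, which should stabilize near C(f, 1):

```python
import cmath

def S(f, x, h=1):
    """Brute-force evaluation of S(f, x, h)."""
    return sum(cmath.exp(2j * cmath.pi * h * r / n)
               for n in range(2, x + 1) for r in range(1, n) if f(r) % n == 0)

# Reducible quadratic from Theorem 1: f(t) = (t + 1)(2t + 1).
f = lambda t: (t + 1) * (2 * t + 1)
for x in (100, 200, 400):
    s = S(f, x)
    print(x, s.real / x, s.imag / x)
```

The number of summands here grows like x log x, so the emergence of a main term of size only x is invisible to the trivial bound, exactly as discussed above.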
Before proceeding to the proofs of our theorems, we establish in the next section a lemma that is used repeatedly throughout the paper. We adopt the following notation and conventions throughout this paper: when a and b are relatively prime integers, ā_b will denote an integer such that ā_b·a ≡ 1 (mod b). Furthermore, when the context is clear, we will also use the simplified notation ā for this multiplicative inverse of a to the implied modulus b, which is often the denominator of the fraction in whose numerator ā appears. The letter p usually denotes a prime number. We adopt the convention throughout that e(t) = exp(2iπt) unless t contains an expression of the form ā_b where (a, b) > 1, in which case e(t) = 0. For example, the expression Σ_{0≤n≤p−1} e(n̄·(n−1)‾/p) is to be interpreted as 0 + 0 + Σ_{2≤n≤p−1} e(n̄_p·(n−1)‾_p/p).
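This convention is easy to encode; the small illustrative helper below (names ours) uses the fact that Python's three-argument pow(a, -1, b) raises ValueError exactly when (a, b) > 1:

```python
import cmath

def inv(a, b):
    """ā_b: an inverse of a modulo b, or None when (a, b) > 1."""
    try:
        return pow(a, -1, b)   # raises ValueError when gcd(a, b) > 1
    except ValueError:
        return None

def e_frac(num, den):
    """e(num/den) with the paper's convention: a term whose numerator involves
    an undefined inverse (signalled here by num is None) contributes 0."""
    return 0 if num is None else cmath.exp(2j * cmath.pi * num / den)

# The introduction's example with p = 7: the n = 0 and n = 1 terms vanish.
p = 7
total = 0
for n in range(p):
    i1, i2 = inv(n, p), inv(n - 1, p)
    total += e_frac(None if (i1 is None or i2 is None) else i1 * i2, p)
print(abs(total))
```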

Exponential sums involving multiplicative inverses
We begin by establishing the following lemma on exponential sums (a slight variation of a result of Hooley [12, Lemma 3]), which is the crucial tool of our paper.
In particular, equation (4) gives an asymptotic evaluation, with a main term and an error term, of an incomplete sum of e(t·n̄_q/q). (While we have written the statement of the lemma, for ease of reference, with explicit subscripts on the multiplicative inverses n̄_q and explicit coprimality conditions of summation, we immediately revert to our conventions of suppressing these notational signals.)

Proof. We begin by collecting the summands according to the value of n (mod q), which we then detect using a further additive character, with the convention e(tā/q) = 0 if (a, q) > 1 as mentioned at the end of the introduction. The h = 0 summand contributes the main term, since the first sum on the left-hand side is a complete Ramanujan sum, which has been evaluated classically (see [16, equation (4.7)]). As for the summands with h ≠ 0, the first inner sum on the right-hand side of equation (5) is a complete Kloosterman sum, which was shown by Hooley [9, Lemma 2] (using Weil's bounds for exponential sums) to be ≪ √(q(t, q))·τ(q). For the remaining sum, since (m, q) = 1, the change of variables η = hm permutes the nonzero residue classes modulo q. This last sum is a geometric series and is consequently ≪ ‖η/q‖^{−1}, where ‖u‖ denotes the distance from u to the nearest integer. Combining these estimates gives the required bound for the h ≠ 0 summands, as required.
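Both complete sums appearing in this proof can be spot-checked numerically. The sketch below (a sanity check, not part of the proof) verifies Hölder's classical evaluation of the Ramanujan sum and the Weil-type bound for complete Kloosterman sums quoted from [9, Lemma 2]:

```python
import cmath, math

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def phi(q):
    return sum(1 for n in range(1, q + 1) if math.gcd(n, q) == 1)

def tau(q):
    return sum(1 for d in range(1, q + 1) if q % d == 0)

def mobius(q):
    m, p = 1, 2
    while p * p <= q:
        if q % p == 0:
            q //= p
            if q % p == 0:
                return 0
            m = -m
        p += 1
    return -m if q > 1 else m

def ramanujan(q, t):
    """c_q(t) = sum over n mod q, (n, q) = 1, of e(t n / q)."""
    return sum(e(t * n / q) for n in range(q) if math.gcd(n, q) == 1)

def kloosterman(m, n, q):
    """K(m, n; q) = sum over (a, q) = 1 of e((m a + n * inverse(a)) / q)."""
    return sum(e((m * a + n * pow(a, -1, q)) / q)
               for a in range(1, q + 1) if math.gcd(a, q) == 1)

# Hölder's evaluation c_q(t) = mu(q/(q,t)) * phi(q) / phi(q/(q,t)):
for q in range(1, 30):
    for t in range(1, 10):
        d = q // math.gcd(q, t)
        assert abs(ramanujan(q, t) - mobius(d) * phi(q) / phi(d)) < 1e-6

# Weil-type bound |K(m, n; q)| <= tau(q) * sqrt((m, q) * q):
for q in (5, 7, 12, 15):
    for m in (1, 2, 3):
        assert abs(kloosterman(m, 1, q)) <= tau(q) * math.sqrt(math.gcd(m, q) * q)
```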

Reducible quadratics (Theorem 1)
Throughout this section, we consider f(n) = (an + b)(cn + d) with a, b, c, d as in Theorem 1. All implicit constants in this section may depend on f, h, and (where appropriate) ε. Define ∆ = ad − bc. The simultaneous congruences an + b ≡ cn + d ≡ 0 (mod p) have a solution only when −b·ā ≡ −d·c̄ (mod p), or equivalently when p | ∆. We start with the case (h, ac∆) = 1; in the following subsection we indicate how to deal with the general case.

The case (ac∆, h) = 1
We begin by handling a coprimality issue between the denominators n and ∆.
Lemma 2. In the sum S(f, x, h), we may restrict to moduli of the form n = mg, where g = (n, ∆) is squarefree with (g, ac) = 1 and where (m, ∆) = 1.

Proof. First we sort by g = (n, ∆). If p | g | ∆ and p | a, then p | c as well, in which case (ax + b)(cx + d) ≡ bd ≢ 0 (mod p) and there are no roots modulo p. Similarly, if p | (g, c), then f has no root modulo p. Thus we can add the condition of summation (g, ac) = 1.
In particular, we may assume that g is squarefree (whence the introduction of the factor µ²(g)) and that (m, g) = 1 (which combines with the existing condition (m, ∆/g) = 1 to yield simply (m, ∆) = 1), completing the proof.
Next we separate the congruence conditions between the two factors of f.

Lemma 3. Suppose that g | ∆ is squarefree, (g, c) = 1, and (m, ∆) = 1. The roots of (ar + b)(cr + d) ≡ 0 (mod mg) are in one-to-one correspondence with the factorizations kℓ = m with (k, ℓ) = 1, (k, a) = 1, and (ℓ, c) = 1. The root r corresponds to the solution of the system of congruences ar + b ≡ 0 (mod k), cr + d ≡ 0 (mod ℓ), r ≡ r_g (mod g), where r_g is the residue class r_g ≡ −d·c̄ (mod g).
Proof. It is straightforward to check that the factorization corresponding to a root r of (ar + b)(cr + d) ≡ 0 (mod mg) is k = (ar + b, m) and ℓ = (cr + d, m), and that this correspondence is the inverse function to the correspondence described in the statement of the lemma.
By the Chinese remainder theorem, the solution of the system of congruences given in Lemma 3 can be written as

r = −b·ā_k·(ℓg)‾_k·ℓg − d·c̄_ℓ·(kg)‾_ℓ·kg + r_g·(kℓ)‾_g·kℓ (mod kℓg).
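This CRT assembly can be verified mechanically; in the sketch below the sample parameters a, b, c, d, k, ℓ, g are our own choices satisfying the hypotheses of Lemma 3:

```python
from math import gcd

# Parameters as in Theorem 1: f(t) = (a t + b)(c t + d), Delta = ad - bc = 5.
a, b, c, d = 2, 1, 1, 3          # (a, b) = (c, d) = 1, ac != 0, ad != bc
k, l, g = 3, 7, 5                # pairwise coprime, (a, k) = (c, l) = 1, g | Delta
assert gcd(k, l) == gcd(k, g) == gcd(l, g) == 1

def inv(u, m):
    return pow(u, -1, m)         # multiplicative inverse of u modulo m

# r_g: common root of both linear factors modulo g (possible because g | Delta):
r_g = (-d * inv(c, g)) % g
assert (a * r_g + b) % g == 0 and (c * r_g + d) % g == 0

# CRT assembly: r = -b*inv(a) (mod k), r = -d*inv(c) (mod l), r = r_g (mod g).
r = (-b * inv(a, k) * inv(l * g, k) * l * g
     - d * inv(c, l) * inv(k * g, l) * k * g
     + r_g * inv(k * l, g) * k * l) % (k * l * g)

assert (a * r + b) * (c * r + d) % (k * l * g) == 0   # r is a root mod k*l*g
print(r, k * l * g)
```

Reading the formula modulo k, ℓ, and g in turn recovers exactly the three congruences of Lemma 3, which is why the product (ar + b)(cr + d) vanishes modulo kℓg.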
We thus obtain

e(hr/(mg)) = e((−hb·ā_k·(ℓg)‾_k·ℓg − hd·c̄_ℓ·(kg)‾_ℓ·kg + hr_g·(kℓ)‾_g·kℓ)/(mg)) = e(−hb·ā·ℓ̄·ḡ/k)·e(−hd·c̄·k̄·ḡ/ℓ)·e(hr_g·k̄·ℓ̄/g).
We split this into two sums S(f, x, h) = S₁(x) + S₂(x) according to the relative sizes of k and ℓ. In S₁(x), we use the standard "inversion formula" (obtained from Bézout's identity): for (u, v) = 1,

e(h·ū/v) = e(−h·v̄/u)·e(h/(uv)). (6)

This formula, used on the second exponential factor in the inner sum above, allows us to move ℓ out of the denominators of the exponential terms and write S₁(x) as

Σ_{g|∆, (g,ac)=1} Σ e(−hb·ā·ℓ̄·ḡ/k)·e(hd·ℓ̄/(cgk))·e(−hd/(cgkℓ))·e(hr_g·k̄·ℓ̄/g).
Next we remark that e(−hd/(cgkℓ)) = 1 + O(1/(gkℓ)). The effect of this error term is sufficiently small. (We remind the reader that the implicit constants in this section may depend on f, h, and ε.) We rewrite the inner sum over ℓ and then apply equation (4) with q = ck∆ and t = ht′, where t′ is an explicit integer depending on a, b, c, d, g, k, and r_g. Since d + cr_g ≡ 0 (mod g), one checks that t′ is a multiple of ∆ but is relatively prime to both c and k.
We thus obtain an asymptotic formula in which we may insert the condition (c, k) = 1 into the main term, because the summand of the main term vanishes when (c, k) > 1. Denote the resulting main term by M(x). By the coprimality conditions among a, c, ∆, and k, the main term factors, and we let λ_h denote the arithmetic function appearing in the inner sum over k. In the particular case h = 1, this yields an explicit Euler product; in this last computation, we have used the fact that if p | (c, ∆), then p | a. In the same way, we derive the asymptotic formula for S₂(x), which, when h = 1, simplifies accordingly. This finishes the proof of Theorem 1 in the case (h, ac∆) = 1.

The case (h, ac∆) > 1
The main difference when dealing with the case (h, ac∆) > 1 is that we lose some cancellation observed in the proof of Lemma 2. We use the notation m | n^∞ to indicate that p | m ⇒ p | n, and we sort the summands according to the prime factors they share with (h, ac∆). Since deg f = 2, the number of roots of f(r) ≡ 0 (mod δn) is at most 2^{ω(δn)}. Defining B = x^{1/5−ε} for some sufficiently small ε > 0, we split this sum according to whether δ > B or δ ≤ B. When δ is large, a trivial upper bound is sufficient. When δ is not large, we adapt the method of the previous section. Since (n, δ) = 1, we may separate, by the Chinese remainder theorem, the root r modulo δn into a component r₀ modulo δ and a component r₁ modulo n. In order to suppress the dependence on n in the summation over r₀, we split this sum according to the class α of n modulo δ. Let G(α, δ) denote the resulting sum over r₀ and H(α, δ) the double sum over n and r₁. It is convenient to notice now that the number of summands in the sum defining G(α, δ) is at most the number of roots of f(r) (mod δ), and hence that sum is bounded by a constant depending on f: indeed, a bound for that number of roots by Sitar and the second author [15, Lemma 3.4] implies that G(α, δ) ≪ 1.

We handle the term H(α, δ) in the same way as in the previous section. We write the corresponding sum S₁(x) as before. Since (α, δ) = 1, the condition gkℓ ≡ α (mod δ) implies that (g, δ) = 1. The analogue of equation (7) now holds, in which the only real difference from before is the congruence gkℓ ≡ α (mod δ). Let δ₂ be the least common multiple δ₂ = [δ, ∆₂]; we still have (δ₂, ck∆₁) = 1.
The two conditions gkℓ ≡ α (mod δ) and (ℓ, ∆₂) = 1 can be expressed using congruences modulo δ₂. We apply Lemma 1 in the same way as in the case (∆, h) = 1 to obtain the analogous asymptotic formula. If we open the sum G(α, δ) and exchange the order of summation with α, we find Ramanujan sums. Since G(α, δ) ≪ 1 as previously remarked, the sum over δ in the main term converges as B tends to ∞ (note that the sum is not over all integers δ but rather only over those integers with prime factors in a fixed finite set, which is a very sparse sequence), and the error resulting from replacing the finite sum over δ with the infinite series is ≪ x/B^{1−ε}. In the error term, the factor δ^{1/4} comes roughly from the summation of the k^{1/2} with k ≤ (x/gδ)^{1/2} together with a trivial summation over α. This error term is acceptable, and this ends the proof of Theorem 1, with C(f, h) expressed in terms of C(h, a, c, ∆₁) as in equation (8).

Linear times an irreducible quadratic (Theorem 2)
In this section we prove Theorem 2 concerning the polynomial f (t) = t(t 2 + 1).

First step: splitting S( f , x)
Since (n, n² + 1) = 1, we can write S(f, x) as a sort of convolution, as in the previous section. The following lemma is elementary.

Lemma 4. Let g(t) be a polynomial with integer coefficients with g(0) = ±1, and let m be a positive integer. The roots of rg(r) ≡ 0 (mod m) are in one-to-one correspondence with the pairs consisting of a factorization kℓ = m with (k, ℓ) = 1 and a root v of g(v) ≡ 0 (mod ℓ). The root r corresponds to the solution modulo m of the system of congruences r ≡ 0 (mod k), r ≡ v (mod ℓ).

By Lemma 4 we may decompose S(f, x) according to the factorization n = kℓ. Let y₁ < y₂ be two parameters close to x^{1/3}, to be chosen later, and write S(f, x) = S₁(x) + S₂(x) + S₃(x), where S₁(x) contains the terms with k > y₂, S₂(x) those with y₁ < k ≤ y₂, and S₃(x) those with k ≤ y₁. We shall see momentarily that in S₁(x) it is possible to use equation (4) in the same way as in the proof of Theorem 1; the main term in our asymptotic formula for S(f, x) arises from this sum. This approach works only when k is sufficiently large (or ℓ sufficiently small), which is to say when k is slightly bigger than x^{1/3}. This is our motivation for the choice of y₂.
The converse is true for S₃(x), in which ℓ is the largest parameter. In this case we use the fact that the second factor is quadratic: we can apply a lemma of Gauss on the correspondence between the roots of n² + 1 ≡ 0 (mod ℓ) and certain representations ℓ = r² + s² as sums of two squares. This approach works when k = o(x^{1/3}), which is why we choose y₁ close to x^{1/3}. The remaining range k ∈ ]y₁, y₂] is covered by a direct application of Hooley's result [11]. Since y₁ and y₂ are close together, Hooley's general bound applied to the irreducible polynomial X² + 1 is sufficient.
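Lemma 4 is easy to confirm by brute force for small moduli. Here we take g(t) = t² + 1 (the case relevant to Theorem 2) and compare a direct root count against the count through coprime factorizations:

```python
from math import gcd

def roots(poly, m):
    """All residues r (mod m) with poly(r) congruent to 0 (mod m)."""
    return [r for r in range(m) if poly(r) % m == 0]

g = lambda t: t * t + 1      # g(0) = 1, as the lemma requires
f = lambda t: t * g(t)       # the polynomial of Theorem 2

# Lemma 4: roots of r*g(r) = 0 (mod m) correspond to pairs of a factorization
# m = k*l with (k, l) = 1 and a root v of g (mod l), via r = 0 (mod k), r = v (mod l).
for m in range(1, 80):
    direct = len(roots(f, m))
    via = sum(len(roots(g, m // k))
              for k in range(1, m + 1)
              if m % k == 0 and gcd(k, m // k) == 1)
    assert direct == via
print("Lemma 4 verified for m < 80")
```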

The first two sums
In the sum S₁(x), the variable k is large and thus we arrange for some cancellation in the sum over this variable. Let ρ(m) denote the number of roots modulo m of the polynomial n² + 1. For any ℓ < x/y₂ and any 0 ≤ v < ℓ with v² + 1 ≡ 0 (mod ℓ), we apply equation (4) to bound the inner sum over k. In the resulting computation we use the multiplicativity of ρ together with the fact that ρ(p^ν) ≤ 2 for all prime powers p^ν. (We could in fact replace the exponent 5 − B by 2 − 3B/2, using the fact that ρ(p^ν) = 0 if p ≡ 3 (mod 4), but that improvement is not significant for our purposes.)

In the sum S₂(x), the variable k is in a crucial range (corresponding to when k and ℓ have comparable size) where the methods for both S₁(x) and S₃(x) fail. The bound for S₂(x) will be a direct consequence of the work of Hooley:

Lemma 5. Let P(X) ∈ ℤ[X] be an irreducible polynomial of degree n ≥ 2. If hk ≠ 0, then the corresponding exponential sum satisfies a bound saving a factor (log x)^{δ_n} over the trivial bound, where δ_n = (n − √n)/n!.
The case k = 1 is [11, Theorem 1]; the proof can be adapted with no difficulty to all k ∈ ℕ and then provides a result that is uniform in k. The dependence on h in this result of Hooley is due only to the appearance of (h, ℓ) in certain intermediate computations.
We apply this lemma with P(X) = X² + 1, replacing x by x/k, and sum over k ∈ ]y₁, y₂], using the fact that log(y₂/y₁) ≪ log log x.

The sum S 3 (x)
In this section we use the special shape of the polynomial n² + 1 to find an upper bound for S₃(x). Following the ideas of the two articles of Hooley [10, 12] concerning τ(n² + 1) and P⁺(n² + 1) (the number of divisors of n² + 1 and the largest prime factor of n² + 1, respectively), we employ the Gauss correspondence between the roots of v² + 1 ≡ 0 (mod ℓ) and certain representations ℓ = r² + s² as sums of two squares. Indeed, for such integers r, s with (rs, ℓ) = 1, we have (r·s̄)² + 1 ≡ 0 (mod ℓ). The parameter k̄ in the exponential gives rise to some coprimality problems. The first author [3] resolved such a difficulty when k is squarefree; in equation (15) we will use an elegant formula of Wu and Xi [20] to handle the general case.
As in the proof of Theorem 1, in the following argument the condition k₂ = (k, r^∞) means that p | k₂ ⇒ p | r and (k/k₂, r) = 1.
Lemma 6. For ℓ > 1, there is a one-to-one correspondence between the representations of ℓ by the form ℓ = r² + s² with (r, s) = 1, r > 0, s > 0, and the solutions of the congruence v² + 1 ≡ 0 (mod ℓ). This bijection is given by v ≡ r·s̄ (mod ℓ).

The first part of this lemma is proved in detail in the book of Smith [19, Art. 86], while equation (15) is [20, Lemma 7.4]. By this lemma, still using the notation k = k₁k₂, we can rewrite S₃(x) as a sum over k₁, k₂, r, s. First we remove the term e(r/(ks(r² + s²))): since e(r/(ks(r² + s²))) = 1 + O(r/(ks(r² + s²))), replacing this term by 1 results in a corresponding error in S₃(x) that is O((log x)⁴).
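The Gauss correspondence can be checked numerically, using the identity (r·s̄)² + 1 ≡ s̄²(r² + s²) ≡ 0 (mod ℓ); the sketch below runs through all coprime representations of each ℓ:

```python
from math import gcd, isqrt

def check(l):
    """For every representation l = r^2 + s^2 with (r, s) = 1, r, s > 0,
    verify that v = r * inverse(s) (mod l) satisfies v^2 + 1 = 0 (mod l)."""
    for s in range(1, isqrt(l) + 1):
        r2 = l - s * s
        r = isqrt(r2)
        if r > 0 and r * r == r2 and gcd(r, s) == 1:
            v = (r * pow(s, -1, l)) % l   # (s, l) = 1 follows from (r, s) = 1
            assert (v * v + 1) % l == 0

for l in range(2, 500):
    check(l)
print("checked l < 500")
```

Note that (s, l) = 1 is automatic: a prime dividing both s and l = r² + s² would divide r, contradicting (r, s) = 1.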
Following the notation of several authors, we denote by (k₁s)♭ and (k₁s)♯ the squarefree and squarefull parts, respectively, of k₁s. Since ((k₁s)♭, (k₁s)♯) = 1, we can use the Chinese remainder theorem as in the proof of Theorem 1, and inserting this in S₃(x) we obtain a corresponding decomposition. Let S₄(x) denote the contribution to S₃(x) of the k₁, k₂, r, s such that (k₁s)♯ > (log x)^45 or k₂ > (log x)^5, and S₅(x) the remaining contribution, that is, the contribution of the k₁, k₂, r, s such that k₂ ≤ (log x)^5 and (k₁s)♯ ≤ (log x)^45.

We remark that if m² is the largest square divisor of (k₁s)♯, then m² ≥ ((k₁s)♯)^{2/3}. We deduce that when (k₁s)♯ > (log x)^45, there exists m > (log x)^15 such that m² | (k₁s)♯. We can write this divisor in the following way: m² = u²v²w² with u² | k₁, v² | s, and w | (k₁, s). Thus we have max{u², v², w²} ≥ m^{2/3}, and there exists d > (log x)^5 such that d² | k₁ or d² | s or d | (k₁, s). In the first case (when d² | k₁), the contribution of the k₁, k₂, r, s is acceptably small; similarly in the second case (when d² | s), and finally for the terms with d | (k₁, s).

It remains to evaluate the contribution to S₄(x) of the terms where k₂ > (log x)^5. Since k₂ = (k₁k₂, r^∞), we have q(k₂) | r, where q(k₂) = ∏_{p|k₂} p is the squarefree kernel of k₂. Thus the summation over r is bounded by x^{1/2}/(q(k₂)(k₁k₂)^{1/2}), and the corresponding summation over all the k₁, k₂, r, s with k₂ > (log x)^5 is then acceptably small.

The rest of this section is devoted to the sum S₅(x). If we replace r by q(k₂)r′, the sum over r′ becomes an incomplete exponential sum of some length R = R(s, k₁, k₂). It is then standard to complete the sum. As before, the completed inner sum over r′ is geometric and is ≪ min{R, ‖h/(k₁k₂s)‖^{−1}}. Let S_a denote the inner sum over the variable a, which is a complete sum.
Applying the Chinese remainder theorem many times, we factor S_a into local sums, where

K_p(a) = e( −a·q(k₂)k₂·(a²·q(k₂)²k₂²·((k₁s/p)‾)² + s²)‾ / p )
and W_{p^ν} is an exponential term modulo p^ν whose argument is a similar rational function in a. Since k₂(k₁s)♯ ≤ (log x)^50, a trivial bound for the sums over a (mod p^ν) with p^ν | k₂(k₁s)♯ is sufficient. For the remaining prime moduli we invoke the Weil bound:

Lemma 7. Let ℙ¹(𝔽_p) be the projective line over 𝔽_p, and let f : ℙ¹(𝔽_p) → ℙ¹(𝔽_p) be a nonconstant rational function. For all u ∈ ℙ¹(𝔽_p), let v_u(f) be the order of the pole of f at u if f(u) = ∞, and v_u(f) = 0 otherwise. Then we have

|Σ_{a∈𝔽_p} e(f(a)/p)| ≤ 2(Σ_{u∈ℙ¹(𝔽_p)} v_u(f))·√p.

Since K_p(a)e(ha/p) has at most 3 poles (including the pole at ∞), all of which are simple, we have |Σ_{a (mod p)} K_p(a)e(ha/p)| ≤ 6√p, from which we deduce that |S_a| ≤ 6^{ω((k₁s)♭)}·√(k₁s)·(log x)^50.
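Weil's bound can be spot-checked in its simplest instance: for f(a) = ā + ha the complete sum is a classical Kloosterman sum, which satisfies the even stronger bound 2√p. A quick numerical verification (for illustration only):

```python
import cmath, math

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def kloosterman_sum(h, p):
    """Sum over a = 1, ..., p-1 of e((inverse(a) + h*a)/p); the rational
    function inverse(a) + h*a has two simple poles, at a = 0 and a = infinity."""
    return sum(e((pow(a, -1, p) + h * a) / p) for a in range(1, p))

# Classical Kloosterman bound |K| <= 2*sqrt(p) for p prime, p not dividing h:
for p in (5, 7, 11, 13, 17, 19, 23):
    for h in range(1, p):
        assert abs(kloosterman_sum(h, p)) <= 2 * math.sqrt(p) + 1e-9
print("Weil bound verified for p <= 23")
```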
Returning to S_R, the bound we have obtained gives a corresponding upper bound for S₅(x): if we take y₁ = x^{1/3}(log x)^{−100}, we obtain S₅(x) ≪ x(log x)^{−12}, which is enough for the proof of Theorem 2.

The polynomial n(n + 1)(2n + 1) (Theorem 3)

The first three sums
Using a method similar to Section 2 above, the solution r of the congruences in the above sums can be written as

r = n₁(−(n₁n₃)‾_{n₂}·n₃ − (2n₁n₂)‾_{n₃}·n₂),

and therefore the exponential summand in the S_i(x) becomes

e(r/(n₁n₂n₃)) = e(−(n₁n₃)‾/n₂ − (2n₁n₂)‾/n₃).
The error term is O((x/y₂)^{3/2}(log x)^5), which is sufficiently small if B is large enough, and therefore we obtain the expected main term for S₁(x). We handle the sum S₂(x) in the same way, but this time summing first over n₂ instead of n₁. Applying the inversion formula (6), we can rewrite equation (19) in the following way:

e(r/(n₁n₂n₃)) = e((n₂)‾/(n₁n₃) − 1/(n₁n₂n₃) − (2n₁n₂)‾/n₃) = e(n̄₂(1 − (2n₁)‾_{n₃}·n₁)/(n₁n₃)) + O(1/(n₁n₂n₃)).

The error term O(1/(n₁n₂n₃)) yields a contribution to S₂(x) that is O((log x)³). Then we apply equation (4), and we finish in the same way as for S₁(x), obtaining the same asymptotic formula. For S₃(x) the corresponding method is to write

e(r/(n₁n₂n₃)) = e(n̄₃(1 − (n̄₁)_{n₂}·2n₁)/(2n₁n₂)) + O(1/(n₁n₂n₃)),

and then, after applying equation (4),

S₃(x) = Σ_{n₁n₂≤x/y₂, max{n₁,n₂}≤y₂, (n₁,n₂)=1} (x/(n₁n₂) − y₂)·µ(2n₁n₂)·(…),

up to an acceptable error. The corresponding main term is computed in the same fashion. Summing these contributions of S₁(x), S₂(x), and S₃(x) in the decomposition at the start of this section, we deduce the main term of Theorem 3.
In S₅(x), since a₂ and a₃ are small, we obtain an acceptable bound; the factor (log z)² comes from the sieving conditions on b₂ and b₃. In particular, we have used an inequality for the sum over y₁/a₂ ≤ b₂ ≤ y₂/a₂, which can be derived by partial summation from [1, Proposition 1]. The bound (20) is sufficiently small when (log v)(log log x) = o(log z) (we will eventually choose v to be a power of log x). We remark that this step is the main obstacle to having an upper bound smaller than x/(log x)² in the error term of Theorem 3.

For S₆(x), we estimate each summand trivially by 1; therefore we may assume that a₃ > w is abnormally large (the bound when a₂ > w is exactly the same) and that n₂ is unrestricted. Following some ideas of Hooley [12], we note that if a₃ > w, then either ω(a₃) ≥ log w/(2 log z), or else there exists d > w^{1/4} such that d² | a₃. We therefore obtain an acceptable bound (ignoring here the condition that b₃ has no small prime factors).

It remains to handle the term S₇(x). We begin in the same way as for S₁(x). Unfortunately, equation (4) is not sufficient here. Since a₃ is not too small and b₃ is not too big, the denominator has three factors, none too small, and we can apply the recent work of Wu and Xi [20] on the q-analogue of the van der Corput method. Such an approach was initiated by Heath-Brown [7] and developed by Graham and Ringrose [6], and more recently by Irving [13, 14] and by Wu and Xi [20], whose arithmetic exponent pairs apply when the denominator has good factorization properties. As in the proof of Theorem 2, we denote by n♯ and n♭ the squarefull and squarefree parts of the integer n, and we write:

Lemma 8. Uniformly for integers α and A, positive integers N and δ, any squarefree integer q = q₁q₂q₃ with (αδ, q) = 1, and any rational function R with integer coefficients, we have a bound of van der Corput type for the sum Σ_{A<n≤A+N} e(αn̄/q + R(n)/δ). We emphasize that the implicit constant above is absolute, and in particular does not depend on R: we handle the contribution of the term R(n)/δ quite trivially.
In our application, the denominator δ is small, so the prospects for cancellation are modest in any case.
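The squarefree/squarefull decomposition used throughout this argument is straightforward to compute; here is a small helper (notation n♭, n♯ as in the text, implementation ours):

```python
from math import gcd

def factor(n):
    """Prime factorization of n as a dict {p: exponent}."""
    f, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            f[p] = f.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def flat_sharp(n):
    """(n_flat, n_sharp): the squarefree part (primes dividing n exactly once)
    and the squarefull part (the rest), so n = n_flat * n_sharp, gcd = 1."""
    flat = sharp = 1
    for p, e in factor(n).items():
        if e == 1:
            flat *= p
        else:
            sharp *= p ** e
    return flat, sharp

for n in range(1, 2000):
    fl, sh = flat_sharp(n)
    assert fl * sh == n and gcd(fl, sh) == 1
print(flat_sharp(360))   # 360 = 2^3 * 3^2 * 5, so this prints (5, 72)
```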
We prove Lemma 8 in the next section; assuming the lemma for the moment, we can complete the proof of Theorem 3. We apply Lemma 8 to the inner sum in equation (23) with q = (n₂b₃a₃)♭ and δ = (n₂b₃a₃)♯, where N = x/(n₂a₃b₃). After doing so, by positivity we may again assume that v < a₃ ≤ w (the bound when v < a₂ ≤ w is exactly the same) and ignore all restrictions upon n₂. We then have to compute all the different resulting sums.

The results of [20] cannot be applied directly in our context because we need a more precise version of the factor N^ε in the error bounds. Careful attention to their paper reveals that it is possible to adapt the arguments to replace this N^ε by a quantity of the type C^{ω(q₁)}(log N)^α. In many circumstances such a refinement is not necessary, but for us it is important due to the very restricted range of the factor a₃. For brevity we will write J = ]A, A + N], W(n) = e(R(n)/δ), and S for the sum to be estimated. We begin by remarking that we may assume that q₂ < N and q₃ < N, for otherwise the lemma is trivial. For any function Ψ and any h ∈ ℤ we define the differenced function ∆_h(Ψ)(n) = Ψ(n)·Ψ(n + h)‾.

Lemma 9. Let q = q₁q₂ with (q₁, q₂) = 1, let J = ]A, A + N] be an interval, and let Ψ_i : ℤ/q_iℤ → ℂ. Then for any positive integer L ≤ N we have an inequality bounding |Σ_{n∈J} Ψ₁(n)Ψ₂(n)|² in terms of the differenced sums ∆_{ℓq₁}(Ψ₂) for ℓ ≤ L.

This inequality, which Wu and Xi call an A-process by analogy with the A-process of the classical van der Corput method, follows from the proof of [20, Lemma 3.1].
Writing L₃ = [N/q₃] (which is at least 1), Lemma 9 gives (see also the beginning of the proof of [20, Theorem 1.2]) a bound for |S|² in terms of sums U(ℓ₃) over shifts 0 ≤ ℓ₃ < L₃, where 1_{J(ℓ₃)} is the indicator function of an interval J(ℓ₃) ⊆ J. We again apply Lemma 9 to each U(ℓ₃), writing L₂ = [N/q₂] (which again is at least 1); we have written ψ₃(n) as in equation (26) so as to make this choice of L₂ valid even when the length of J(ℓ₃) is much smaller than N. We obtain a bound involving the pairs of shifts ℓ₂, ℓ₃, where J(ℓ₂, ℓ₃) is some interval contained in J(ℓ₃). Then we complete the above sum over n ∈ J(ℓ₂, ℓ₃). We denote by Σ_a(h) the inner sum over a in the second term and perform the same manipulations as for the sums S_a in the proof of Theorem 2.

The function F in equation (28) can be rewritten in the following way, with λ = α·(q₂q₃)‾:

F(n) = λ·G(n)/(n(n + ℓ₂q₂)(n + ℓ₃q₃)),

where G(n) is a polynomial with constant term ℓ₂q₂·ℓ₃q₃·(ℓ₂q₂ + ℓ₃q₃) (the exception being when p | (ℓ₂q₂ + ℓ₃q₃), in which case we actually have F(n) = 2λ·ℓ₂q₂·ℓ₃q₃/(n(n + ℓ₂q₂)(n + ℓ₃q₃))). If p ∤ ℓ₂ℓ₃, the function v ↦ F(v·(δq₁)‾) + hv has at most 5 poles, each pole being simple (including the pole at ∞); this is most clearly seen from the definition (28) of F(n). Then by Lemma 7 we have

|Σ_{v=1}^{p} e((F(v·(δq₁)‾) + hv)/p)| ≤ 10√p when p ∤ ℓ₂ℓ₃.

We deduce from equation (29) a bound for Σ_a(h), and for the sum over ℓ₂ we have, for any ℓ₃ ≤ L₃, a corresponding estimate, which suffices to finish the proof of Lemma 8.

Generalization of Lemma 8, the A k -process
In this section, we indicate how to iterate the ideas of the proof of Lemma 8 to obtain bounds for short exponential sums whose denominator can be decomposed into k + 1 factors. This generalization relies on techniques from two important papers of Irving [13, 14]; again, our contribution here consists mainly in replacing a factor of N^ε by a more precise error term. Essentially, we would like to apply Lemma 9 consecutively k times. Irving has given a precise formulation of this iteration; to state his result, we need to introduce some notation corresponding to the iterates of the ∆_h(Ψ) used in the previous section. For any complex-valued function f, define

∆_{h₁,...,h_k}(f)(n) = ∏_{S⊆{1,...,k}} f(n + Σ_{s∈S} h_s)^{σ(S)},

where σ(S) denotes that the complex conjugate is taken when #S is odd. With this notation, we may quote [14, Lemma 2.2]:

Lemma 10. Let k ∈ ℕ and q₀, ..., q_k ∈ ℕ. For each 0 ≤ i ≤ k, let f_i : ℤ → ℂ be a function with period q_i such that f_i(n) ≪ 1. Set q = q₀···q_k and f(n) = ∏_{i=0}^{k} f_i(n). If I is any interval of length at most N, then |Σ_{n∈I} f(n)| is bounded in terms of the iterated differences ∆_{h₁q₁,...,h_kq_k}(f₀) summed over subintervals I(h₁, ..., h_k) of I.
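The iterated difference is easy to implement and exercise. The sketch below (our example, for illustration, using the conjugate-on-odd-subsets convention stated above) checks the familiar Weyl-differencing fact that differencing f(n) = e(αn²) twice leaves a quantity independent of n:

```python
import cmath
from itertools import chain, combinations

def e(t):
    return cmath.exp(2j * cmath.pi * t)

def delta(f, hs):
    """Iterated difference: product over all subsets S of the shifts hs of
    f(n + sum(S)), conjugated when the subset has odd size."""
    subs = list(chain.from_iterable(combinations(hs, r) for r in range(len(hs) + 1)))
    def g(n):
        out = 1
        for S in subs:
            val = f(n + sum(S))
            out *= val.conjugate() if len(S) % 2 == 1 else val
        return out
    return g

# For f(n) = e(alpha * n^2), two differencings kill the dependence on n:
# the phase telescopes to 2 * alpha * h1 * h2.
alpha, h1, h2 = 0.3, 2, 5
f = lambda n: e(alpha * n * n)
d2 = delta(f, [h1, h2])
for n in range(10):
    assert abs(d2(n) - e(2 * alpha * h1 * h2)) < 1e-9
print("double difference is constant in n")
```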
We also quote the following combinatorial lemma [13, Lemma 4.5]: Lemma 11. Let p ≥ 3 be prime and h 1 , . . . , h k ∈ F p . Suppose that for every b ∈ F p , the number of subsets S ⊂ {1, . . . , k} with b = ∑ s∈S h s is even. Then some h i must equal 0.
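Lemma 11 can be confirmed by exhaustive search for small p and k (an independent sanity check, not a substitute for Irving's proof):

```python
from itertools import product, chain, combinations

def subsets(idx):
    return chain.from_iterable(combinations(idx, r) for r in range(len(idx) + 1))

def hypothesis_holds(hs, p):
    """True iff for every b in F_p the number of subsets S of {1..k}
    with sum over S of h_s congruent to b (mod p) is even."""
    counts = [0] * p
    for S in subsets(range(len(hs))):
        counts[sum(hs[i] for i in S) % p] += 1
    return all(c % 2 == 0 for c in counts)

# Lemma 11: if the hypothesis holds, some h_i must be 0 (mod p).
for p in (3, 5, 7):
    for k in (1, 2, 3):
        for hs in product(range(p), repeat=k):
            if hypothesis_holds(hs, p):
                assert 0 in hs
print("Lemma 11 verified for p <= 7, k <= 3")
```

The lemma reflects the fact that, in the group algebra F₂[ℤ/pℤ], the product of the elements 1 + x^{h_i} is nonzero whenever every h_i is nonzero.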
We are now prepared to establish our generalization of Lemma 8.
Lemma 12. Uniformly for any integers α and A, any positive integers N, δ, and k, and any positive integers q₀, ..., q_k such that q = q₀···q_k is squarefree and coprime to αδ, and for any rational function R with integer coefficients, the analogue of the bound of Lemma 8 holds with k + 1 factors.

Proof. We apply Lemma 10 with the f_i given by the natural factorization of the summand via the Chinese remainder theorem, where as in the previous section W(n) = e(R(n)/δ). Our desire at this point is to apply Weil's bound to the sums over n that arise from equation (32). After the same manipulations as in the proof of Lemma 8, the analogue of equation (29) follows. In order to apply Weil's bound (Lemma 7), we need to confirm that the argument of the exponential is nonconstant modulo p (even if p | m).
Note that the numerator in equation (33) can be written as a sum over subsets of {1, ..., k}. When p ∤ h₁···h_k (so that indeed p ∤ h₁q₁···h_kq_k), Lemma 11 implies that at least one of the inner sums on the right-hand side of equation (34) has an odd number of terms, and in particular (by considering its parity) is nonzero. In particular, the numerator in equation (33) is nonconstant, and thus Lemma 7 can be applied.

Product of k linear factors
This section is devoted to the proof of Theorem 4. We will skip some details when the arguments are similar to the previous proofs. All implicit constants in this section may depend on f and h. Define A = ∏_{1≤i<j≤k}(a_ib_j − a_jb_i) · ∏_{i=1}^{k} a_i. The first step follows the beginning of the proof of Theorem 1 in Section 3.2. However, since f has more than two linear factors, the discussions related to the greatest common divisors of A, h, and the denominators n are more delicate. This is why, in our splitting analogous to (10), the summation over δ will be over δ | (hA)^∞ instead of δ | (h, A)^∞. We keep the notations S(f, x, h) = S_>(x, h) + S_≤(x, h), where in S_>(x, h) the parameter δ | (hA)^∞ exceeds B, but now with B = (log x)^{10k} instead of x^{1/5−ε}. We bound S_>(x, h) by ≪ x(log x)^{k−1}B^{−1+ε} with ε > 0 arbitrarily small, as in equation (11) (indeed ε = 1/10 will suffice for us).
For S_≤(x, h), however, an analogous version of equation (12) is not sufficient. This is due to the fact that a trivial summation over δ < B would introduce a factor B > (log x)^k into our error terms, while we can win only a factor (log x)⁴ in an error term (denoted T₅^{(1)}(r, δ) later in this section) arising from certain denominators n that are the product of several divisors of the same size.
The number of roots r in equation (35) is O(A^{ω(δ)}) (see Nagell [17, p. 90, Theorem 54] for an even more precise result) and thus O(1) with our conventions for implicit constants. Since our polynomial f is the product of k linear factors, generalizations of Lemma 3 and equation (6) allow us to write the inner sum as a sum T(r, δ) over k-tuples of moduli m₁, ..., m_k. We set y = x^{1/3}(log x)^{10k} and write T(r, δ) = T₁(r, δ) + T₂(r, δ), where the sum T₂(r, δ) contains precisely those summands for which max{m₁, ..., m_k} > y. We decompose T₂(r, δ) = S₁ + ··· + S_k into k sums, where each S_i is defined by the conditions m_i > y and m_j ≤ y for j < i. In each S_i, we use the inversion formula (6) and apply Lemma 1 as in the proof of Theorem 3. This yields an asymptotic formula of the shape S_i = xC_i(f, h, δ) + O((x/y)^{3/2}·√δ·(log x)^k) for some constants C_i(f, h, δ) similar to the constants found in Section 3.2 in the proof of Theorem 1.
These contributions comprise the main term in Theorem 4 when k = 3 and k = 4. It remains to find an upper bound for T 1 (r, δ ). This upper bound will be o(x) only for k = 3 and k = 4, which allows for our asymptotic formula in those cases; for k ≥ 5 it provides only a nontrivial upper bound as stated in Theorem 4.
We split the sum T₁(r, δ) into k! subsums according to the ordering of the m_i. Let T₁^{(1)}(r, δ) denote the subsum of T₁(r, δ) with the additional condition that m₁ ≥ m₂ ≥ ··· ≥ m_k; the estimate we find for this subsum will hold for all k! subsums. We would like to deal with T₁^{(1)}(r, δ) simply by applying Lemma 12; however, this lemma is not efficient enough when m₃ is close to m₁. Consequently, we make one more splitting of T₁^{(1)}(r, δ).