Recommended setup for own sync server

Bug report

I am struggling to set up my own sync server for Inkdrop. I am using CouchDB with SSL on a Raspberry Pi. This small server runs on my home network behind a NAT router. The connection from the Inkdrop client to the CouchDB server works fine, but synchronization never completes and no documents are written to the database. After a few minutes Inkdrop occasionally crashes.

To my knowledge there is no prepackaged version of CouchDB available for Raspbian, so I built it from source. It starts up fine and accepts requests, but Inkdrop fails to sync with it. I tried both major versions of CouchDB. Server logs are included below.

Info

Client

  • Platform: Fedora
  • Platform version: 27
  • App Version: 3.18.0

Server

  • Hardware: Raspberry Pi 1 Model B
  • Platform: Raspbian armhf
  • Platform version: Jessie
  • CouchDB version: 2.1.1 / 1.7.1
  • Erlang version: 1.17.3

Logs

On CouchDB v2.1.1 there are error notices pointing to mochiweb_request.erl. Otherwise the logs are similar to those from version 1. I forgot to save them, but I can reproduce them if needed.

After downgrading to CouchDB v1.7.1, the following requests run in a loop with the same IDs, just as with version 2. There are no error logs this time, but nothing ever gets synced either.

[Wed, 14 Mar 2018 13:39:42 GMT] [info] [<0.151.0>] xxx.xxx.xxx.xxx - - GET /my-inkdrop-notes/_local/3aMvgnrVzoY491rYLaL_Hw%3D%3D? 404
[Wed, 14 Mar 2018 13:39:42 GMT] [info] [<0.157.0>] xxx.xxx.xxx.xxx - - GET /my-inkdrop-notes/_local/UedJQedwQBSPcBjCYJu47w%3D%3D? 200
[Wed, 14 Mar 2018 13:39:43 GMT] [info] [<0.145.0>] xxx.xxx.xxx.xxx - - GET /my-inkdrop-notes/_changes?timeout=24000&style=all_docs&heartbeat=10000&since=0&limit=100 200
[Wed, 14 Mar 2018 13:39:43 GMT] [info] [<0.152.0>] xxx.xxx.xxx.xxx - - GET /my-inkdrop-notes/_local/UedJQedwQBSPcBjCYJu47w%3D%3D? 200
[Wed, 14 Mar 2018 13:39:43 GMT] [info] [<0.156.0>] xxx.xxx.xxx.xxx - - POST /my-inkdrop-notes/_revs_diff 200
[Wed, 14 Mar 2018 13:39:43 GMT] [info] [<0.5362.1>] xxx.xxx.xxx.xxx - - POST /my-inkdrop-notes/_bulk_docs 201
[Wed, 14 Mar 2018 13:39:44 GMT] [info] [<0.5369.1>] xxx.xxx.xxx.xxx - - GET /my-inkdrop-notes/_changes?timeout=24000&style=all_docs&feed=longpoll&heartbeat=10000&since=0&limit=100 200

Here is the result of a networked GET request to my database on the remote:

{"db_name":"my-inkdrop-notes","doc_count":0,"doc_del_count":0,"update_seq":1,"purge_seq":0,"compact_running":false,"disk_size":454738,"data_size":540,"instance_start_time":"1521146676125385","disk_format_version":6,"committed_update_seq":1}

Considerations

I also tried syncing to localhost on the client, and that worked fine (admin party and no SSL, obviously). In contrast to Raspbian, Fedora actually ships a CouchDB package.
Further, I suspected the /db/_bulk_docs request to be problematic on my remote machine, as its Content-Length is rather large at about 8 MB and my connection is sometimes unreliable. However, sending an equally large JSON body via curl to the remote database succeeds; see the example below.
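The curl test looked roughly like this (URL, credentials and payload are placeholders; big.json stands for an approximately 8 MB {"docs": [...]} body):

curl -X POST https://xxx.xxx.xxx.xxx:6984/my-inkdrop-notes/_bulk_docs \
  -u admin:password \
  -H "Content-Type: application/json" \
  --data-binary @big.json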
There are no errors in the Developer console in Inkdrop.

Please help me fix the problem with my setup. Reports of other self-hosted solutions are also welcome! Thank you

Hi Hoopy,

If you disable SSL on your CouchDB, does it work without problems?
CouchDB’s built-in SSL support is kind of buggy.
Please follow these instructions to enable SSL using nginx:

https://doc.inkdrop.info/manual/synchronizing-in-the-cloud#enable-ssl

It is the same setup as the official Inkdrop sync server, so it should work.
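The gist of that page is roughly the following; this is only a sketch (server_name, certificate paths and the body size limit are placeholders, and I assume CouchDB listens on localhost:5984), so please follow the linked docs for the exact config:

server {
  listen 6984 ssl;
  server_name sync.example.com;                 # placeholder
  ssl_certificate     /path/to/fullchain.pem;   # placeholder
  ssl_certificate_key /path/to/privkey.pem;     # placeholder

  location / {
    # terminate SSL here and proxy plain HTTP to CouchDB
    proxy_pass http://localhost:5984;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Ssl on;
    client_max_body_size 20M;                   # allow large _bulk_docs payloads
  }
}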

Hi craftzdog,

disabling SSL on my remote CouchDB instance works! So the built-in SSL support indeed causes some problems.

Next I tried terminating SSL with nginx and using it as a reverse proxy to CouchDB upstream. Unfortunately, the configuration in the docs does not work for me. The problem lies with _bulk_docs whenever it gets too large: nginx reports status code 400 in access.log along with “client prematurely closed connection” in error.log.

Looking at couch.log, it appears the request never reaches CouchDB.

Even with SSL disabled, nginx chokes on _bulk_docs. I get status code 499 with the following error: “epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while connecting to upstream”.
If I enable proxy_ignore_client_abort (see the snippet below for where I put it), I can make it work, but with SSL enabled I still get the odd 400 error. I tried adjusting all kinds of timeout and buffer options, but nothing helps.
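For reference, this is roughly where I enabled it, based on the proxy config from the docs (a sketch, not my full config):

location / {
  proxy_pass http://localhost:5984;
  proxy_set_header Host $host;
  # keep the upstream request going even if the client closes its side early
  proxy_ignore_client_abort on;
}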

I understand this problem is out of scope for you, so please don’t bother with it. Just wanted to give you the latest update.

This appears to be a more general problem: sync does not work with the Inkdrop sync server either.

At one point I inserted a couple of images into a single note. This might be how I created this edge case.

That’s weird.
I’d like to know about two things:

  1. Does it work on your local machine with the same setup?
    • If it works, it is a network problem.
  2. What if you change the nginx config like so:
    location ~ ^/(.*)/(_changes|_bulk_docs) {
      proxy_pass http://localhost:5984;
      proxy_redirect off;
      proxy_buffering off;
      proxy_set_header Host $host;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Ssl on;
    }
    
    I guess the client disconnects because it gets no response from the server, since proxy_buffering is on by default.

Does it work on your local machine with the same setup?

It works locally using nginx as a reverse proxy together with self-signed certs. I started Inkdrop with NODE_TLS_REJECT_UNAUTHORIZED=0 to accept the certificate on localhost.

Update: It also works with Let’s Encrypt certs and with my domain redirected to localhost via /etc/hosts.
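Roughly what those two tweaks looked like (the domain is a placeholder, and I am assuming the Inkdrop binary is simply inkdrop on the PATH):

# accept the self-signed certificate when testing against localhost
NODE_TLS_REJECT_UNAUTHORIZED=0 inkdrop

# /etc/hosts entry for the Let's Encrypt test: point my domain at localhost
127.0.0.1   sync.example.com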

$ nginx -V
nginx version: nginx/1.12.1
built by gcc 7.1.1 20170802 (Red Hat 7.1.1-7) (GCC) 
built with OpenSSL 1.1.0f-fips  25 May 2017 (running with OpenSSL 1.1.0g-fips  2 Nov 2017)
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-http_auth_request_module --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E'

What if you change the nginx config like so:

It makes no difference, unfortunately.

I switched from my home setup to AWS cloud hosting to eliminate any NAT and connection issues, but the problem remains. It runs CentOS 7 with nginx/1.12.2 (Update: built with OpenSSL 1.0.2k-fips 26 Jan 2017) and couchdb/2.1.1. I am using the config you suggested, except for a bigger client_max_body_size.
Still, I am getting status code 400, and additionally 408, for _bulk_docs requests when trying to sync. I tried Apache with mod_proxy_http as well, but it does not work either.

This is when I finally decided to sync against your server. :tired_face:

Here is my nginx error.log in debug mode for that particular request in case it helps:

2018/03/20 12:29:20 [debug] 6210#6210: *20 accept: xxx.xxx.xxx.xxx:37028 fd:3
2018/03/20 12:29:20 [debug] 6210#6210: *20 event timer add: 3: 60000:1126997996
2018/03/20 12:29:20 [debug] 6210#6210: *20 reusable connection: 1
2018/03/20 12:29:20 [debug] 6210#6210: *20 epoll add event: fd:3 op:1 ev:80002001
2018/03/20 12:29:20 [debug] 6210#6210: *20 http check ssl handshake
2018/03/20 12:29:20 [debug] 6210#6210: *20 http recv(): 1
2018/03/20 12:29:20 [debug] 6210#6210: *20 https ssl handshake: 0x16
2018/03/20 12:29:20 [debug] 6210#6210: *20 tcp_nodelay
2018/03/20 12:29:20 [debug] 6210#6210: *20 SSL server name: "sync.example.com"
2018/03/20 12:29:20 [debug] 6210#6210: *20 SSL NPN advertised
2018/03/20 12:29:20 [debug] 6210#6210: *20 SSL_do_handshake: -1
2018/03/20 12:29:20 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:20 [debug] 6210#6210: *20 reusable connection: 0
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL handshake handler: 0
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_do_handshake: 1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL reused session
2018/03/20 12:29:21 [debug] 6210#6210: *20 reusable connection: 1
2018/03/20 12:29:21 [debug] 6210#6210: *20 http wait request handler
2018/03/20 12:29:21 [debug] 6210#6210: *20 posix_memalign: 003F3EB0:256 @16
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 003FBF08:1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 free: 003FBF08
2018/03/20 12:29:21 [debug] 6210#6210: *20 http wait request handler
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 003FBF08:1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 free: 003FBF08
2018/03/20 12:29:21 [debug] 6210#6210: *20 http wait request handler
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 0044F2E8:1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 free: 0044F2E8
2018/03/20 12:29:21 [debug] 6210#6210: *20 http wait request handler
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 0044F2E8:1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 free: 0044F2E8
2018/03/20 12:29:21 [debug] 6210#6210: *20 http wait request handler
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 0044F2E8:1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 free: 0044F2E8
2018/03/20 12:29:21 [debug] 6210#6210: *20 http wait request handler
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 0044F2E8:1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 free: 0044F2E8
2018/03/20 12:29:21 [debug] 6210#6210: *20 http wait request handler
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 0044F2E8:1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: 1024
2018/03/20 12:29:21 [debug] 6210#6210: *20 reusable connection: 0
2018/03/20 12:29:21 [debug] 6210#6210: *20 posix_memalign: 0044F6F0:4096 @16
2018/03/20 12:29:21 [debug] 6210#6210: *20 http process request line
2018/03/20 12:29:21 [debug] 6210#6210: *20 http request line: "POST /my-inkdrop-notes/_bulk_docs HTTP/1.1"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http uri: "/my-inkdrop-notes/_bulk_docs"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http args: ""
2018/03/20 12:29:21 [debug] 6210#6210: *20 http exten: ""
2018/03/20 12:29:21 [debug] 6210#6210: *20 http process request header line
2018/03/20 12:29:21 [debug] 6210#6210: *20 http header: "Authorization: Basic SECRET=="
2018/03/20 12:29:21 [debug] 6210#6210: *20 http header: "Accept: application/json"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http header: "Content-Type: application/json"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http header: "host: sync.example.com:6984"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http header: "content-length: 8577673"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http header: "Connection: close"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http header done
2018/03/20 12:29:21 [debug] 6210#6210: *20 event timer del: 3: 1126997996
2018/03/20 12:29:21 [debug] 6210#6210: *20 generic phase: 0
2018/03/20 12:29:21 [debug] 6210#6210: *20 rewrite phase: 1
2018/03/20 12:29:21 [debug] 6210#6210: *20 test location: "/"
2018/03/20 12:29:21 [debug] 6210#6210: *20 test location: ~ "^/(.*)/(_changes|_bulk_docs)"
2018/03/20 12:29:21 [debug] 6210#6210: *20 using configuration "^/(.*)/(_changes|_bulk_docs)"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http cl:8577673 max:20971520
2018/03/20 12:29:21 [debug] 6210#6210: *20 rewrite phase: 3
2018/03/20 12:29:21 [debug] 6210#6210: *20 post rewrite phase: 4
2018/03/20 12:29:21 [debug] 6210#6210: *20 generic phase: 5
2018/03/20 12:29:21 [debug] 6210#6210: *20 generic phase: 6
2018/03/20 12:29:21 [debug] 6210#6210: *20 generic phase: 7
2018/03/20 12:29:21 [debug] 6210#6210: *20 access phase: 8
2018/03/20 12:29:21 [debug] 6210#6210: *20 access phase: 9
2018/03/20 12:29:21 [debug] 6210#6210: *20 access phase: 10
2018/03/20 12:29:21 [debug] 6210#6210: *20 post access phase: 11
2018/03/20 12:29:21 [debug] 6210#6210: *20 generic phase: 12
2018/03/20 12:29:21 [debug] 6210#6210: *20 generic phase: 13
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body preread 789
2018/03/20 12:29:21 [debug] 6210#6210: *20 http request body content length filter
2018/03/20 12:29:21 [debug] 6210#6210: *20 http body new buf t:1 f:0 0044F3D3, pos 0044F3D3, size: 789 file: 0, size: 0
2018/03/20 12:29:21 [debug] 6210#6210: *20 malloc: 004506F8:8192
2018/03/20 12:29:21 [debug] 6210#6210: *20 http read client request body
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: 8192
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body recv 8192
2018/03/20 12:29:21 [debug] 6210#6210: *20 http body new buf t:1 f:0 004506F8, pos 004506F8, size: 8192 file: 0, size: 0
2018/03/20 12:29:21 [debug] 6210#6210: *20 http write client request body, bufs 00450420
2018/03/20 12:29:21 [debug] 6210#6210: *20 add cleanup: 004505D0
2018/03/20 12:29:21 [debug] 6210#6210: *20 hashed path: /var/cache/nginx/client_temp/0000000001
2018/03/20 12:29:21 [debug] 6210#6210: *20 temp fd:12
2018/03/20 12:29:21 [warn] 6210#6210: *20 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000001, client: xxx.xxx.xxx.xxx, server: sync.example.com, request: "POST /my-inkdrop-notes/_bulk_docs HTTP/1.1", host: "sync.example.com:6984"
2018/03/20 12:29:21 [debug] 6210#6210: *20 writev: 12, 8981, 0
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: 7168
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body recv 7168
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body rest 8568692
2018/03/20 12:29:21 [debug] 6210#6210: *20 event timer add: 3: 60000:1126998378
2018/03/20 12:29:21 [debug] 6210#6210: *20 http finalize request: -4, "/my-inkdrop-notes/_bulk_docs?" a:1, c:2
2018/03/20 12:29:21 [debug] 6210#6210: *20 http request count:2 blk:0
2018/03/20 12:29:21 [debug] 6210#6210: *20 http run request: "/my-inkdrop-notes/_bulk_docs?"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http read client request body
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body recv -2
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body rest 8568692
2018/03/20 12:29:21 [debug] 6210#6210: *20 event timer: 3, old: 1126998378, new: 1126998396
2018/03/20 12:29:21 [debug] 6210#6210: *20 http run request: "/my-inkdrop-notes/_bulk_docs?"
2018/03/20 12:29:21 [debug] 6210#6210: *20 http read client request body
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:21 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body recv -2
2018/03/20 12:29:21 [debug] 6210#6210: *20 http client request body rest 8568692
2018/03/20 12:29:21 [debug] 6210#6210: *20 event timer: 3, old: 1126998378, new: 1126998402
[...]
2018/03/20 12:29:31 [debug] 6210#6210: *20 http run request: "/my-inkdrop-notes/_bulk_docs?"
2018/03/20 12:29:31 [debug] 6210#6210: *20 http read client request body
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body recv -2
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body rest 6946676
2018/03/20 12:29:31 [debug] 6210#6210: *20 event timer: 3, old: 1127008359, new: 1127008651
2018/03/20 12:29:31 [debug] 6210#6210: *20 http run request: "/my-inkdrop-notes/_bulk_docs?"
2018/03/20 12:29:31 [debug] 6210#6210: *20 http read client request body
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body recv -2
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body rest 6946676
2018/03/20 12:29:31 [debug] 6210#6210: *20 event timer del: 3: 1127008359
2018/03/20 12:29:31 [debug] 6210#6210: *20 event timer add: 3: 60000:1127008700
2018/03/20 12:29:31 [debug] 6210#6210: *20 http run request: "/my-inkdrop-notes/_bulk_docs?"
2018/03/20 12:29:31 [debug] 6210#6210: *20 http read client request body
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_read: -1
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_get_error: 2
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body recv -2
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body rest 6946676
2018/03/20 12:29:31 [debug] 6210#6210: *20 event timer: 3, old: 1127008700, new: 1127008702
2018/03/20 12:29:31 [debug] 6210#6210: *20 http run request: "/my-inkdrop-notes/_bulk_docs?"
2018/03/20 12:29:31 [debug] 6210#6210: *20 http read client request body
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_read: 1024
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body recv 1024
2018/03/20 12:29:31 [debug] 6210#6210: *20 http body new buf t:1 f:0 004506F8, pos 004506F8, size: 8192 file: 0, size: 0
2018/03/20 12:29:31 [debug] 6210#6210: *20 http write client request body, bufs 004504D0
2018/03/20 12:29:31 [debug] 6210#6210: *20 write: 12, 004506F8, 8192, 1630997
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_read: 8192
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body recv 8192
2018/03/20 12:29:31 [debug] 6210#6210: *20 http body new buf t:1 f:0 004506F8, pos 004506F8, size: 8192 file: 0, size: 0
2018/03/20 12:29:31 [debug] 6210#6210: *20 http write client request body, bufs 004504D0
2018/03/20 12:29:31 [debug] 6210#6210: *20 write: 12, 004506F8, 8192, 1639189
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_read: 7168
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_read: 0
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_get_error: 5
2018/03/20 12:29:31 [debug] 6210#6210: *20 peer shutdown SSL cleanly
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body recv 7168
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body rest 6930292
2018/03/20 12:29:31 [debug] 6210#6210: *20 http client request body recv 0
2018/03/20 12:29:31 [info] 6210#6210: *20 client prematurely closed connection, client: xxx.xxx.xxx.xxx, server: sync.example.com, request: "POST /my-inkdrop-notes/_bulk_docs HTTP/1.1", host: "sync.example.com:6984"
2018/03/20 12:29:31 [debug] 6210#6210: *20 http finalize request: 400, "/my-inkdrop-notes/_bulk_docs?" a:1, c:1
2018/03/20 12:29:31 [debug] 6210#6210: *20 http terminate request count:1
2018/03/20 12:29:31 [debug] 6210#6210: *20 http terminate cleanup count:1 blk:0
2018/03/20 12:29:31 [debug] 6210#6210: *20 http posted request: "/my-inkdrop-notes/_bulk_docs?"
2018/03/20 12:29:31 [debug] 6210#6210: *20 http terminate handler count:1
2018/03/20 12:29:31 [debug] 6210#6210: *20 http request count:1 blk:0
2018/03/20 12:29:31 [debug] 6210#6210: *20 http close request
2018/03/20 12:29:31 [debug] 6210#6210: *20 http log handler
2018/03/20 12:29:31 [debug] 6210#6210: *20 run cleanup: 004505D0
2018/03/20 12:29:31 [debug] 6210#6210: *20 file cleanup: fd:12
2018/03/20 12:29:31 [debug] 6210#6210: *20 free: 004506F8
2018/03/20 12:29:31 [debug] 6210#6210: *20 free: 0044F6F0, unused: 98
2018/03/20 12:29:31 [debug] 6210#6210: *20 close http connection: 3
2018/03/20 12:29:31 [debug] 6210#6210: *20 SSL_shutdown: 1
2018/03/20 12:29:31 [debug] 6210#6210: *20 event timer del: 3: 1127008700
2018/03/20 12:29:31 [debug] 6210#6210: *20 reusable connection: 0
2018/03/20 12:29:31 [debug] 6210#6210: *20 free: 0044F2E8
2018/03/20 12:29:31 [debug] 6210#6210: *20 free: 0040DAA0, unused: 8
2018/03/20 12:29:31 [debug] 6210#6210: *20 free: 003F3EB0, unused: 184

Update: nginx version details for the above error.log, captured on the Raspberry Pi:

pi@raspberrypi ~ $ nginx -V
nginx version: nginx/1.13.9
built by gcc 4.9.2 (Raspbian 4.9.2-10+deb8u1) 
built with OpenSSL 1.0.1t  3 May 2016
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_v2_module --with-ipv6 --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-debug

Update 2: Updating nginx to use OpenSSL 1.1.0g (same as Fedora on localhost) does not solve the problem. Still getting the same SSL_get_error: 5 just before the connection is closed.

I looked at the network layer, and it is in fact the client that keeps closing the connection. It happens around the 10-second mark. I think it might be a problem with PouchDB.

Disabling SSL on my remote CouchDB instance works! So the built-in SSL support indeed causes some problems.

I also have to correct the statement above. I have been testing on Fedora and Windows side by side, and it turns out the client behaves differently on each platform.

Windows

  • Synchronization works over HTTP. Connections stay open as long as they need to. This works both with nginx as a reverse proxy and with direct connections to CouchDB.
  • The client closes the connection after 10 seconds when HTTPS is used.

Fedora

  • The client closes the connection after 10 seconds, regardless of HTTP or HTTPS.

It sounds irrational, but maybe somebody can confirm. It should be the same with the original Inkdrop sync server, as I suspect the issue to be client-related.

However, sending an equally large JSON body via curl to the remote database succeeds.

This earlier finding validates my theory.

Looking at couch.log, it appears the request never reaches CouchDB.

I also found the nginx directive proxy_request_buffering (available since 1.7.11), which controls buffering of the client request body. When it is turned off, incoming _bulk_docs requests show up in couch.log as well (see the snippet below).
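For anyone else trying this, I added the directive to the location block suggested earlier, roughly like so (a sketch, not my complete config):

location ~ ^/(.*)/(_changes|_bulk_docs) {
  proxy_pass http://localhost:5984;
  proxy_redirect off;
  proxy_buffering off;
  # stream the request body straight to CouchDB instead of buffering it in nginx first
  proxy_request_buffering off;
  proxy_set_header Host $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Ssl on;
}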

Thank you for the detailed report.
I’m afraid I still don’t understand the cause of the problem.

Could you check whether any error logs are shown in the app?
You can toggle the Developer Tools from the menu -> Developer -> Toggle Developer Tools.
The app also outputs some logs to the terminal if you run it from a terminal.

During the initial sync, it sends a lot of docs at once via _bulk_docs (100 per request by default). That might be too large for your network (or for the app?).
So I guess it would be good to reduce the batch size.
I’ll make it configurable in the next version.

Thanks!

I just released v3.19.0.
From this version you can configure the sync options in config.cson like so:

"*":
  core:
    db:
      sync:
        options:
          push:
            batch_size: 1
            batches_limit: 1

The location of config.cson is here:

  • on macOS: ~/Library/Application Support/inkdrop/config.cson
  • on Windows: %APPDATA%/inkdrop/config.cson
  • on Linux: ~/.config/inkdrop/config.cson

You can specify PouchDB’s replication options from core.db.sync.options.
In this example, it specifies two options:

  • core.db.sync.options.push.batch_size:
    • Number of change feed items to send at a time (Defaults to 100)
  • core.db.sync.options.push.batches_limit:
    • Number of batches to process at a time (Defaults to 10)

Specifying them as 1 means that docs are sent one by one, one per HTTP request.
Let’s see if it works in your environment.
Thanks for your cooperation!

Specifying them as 1 means that docs are sent one by one, one per HTTP request.
Let’s see if it works in your environment.

That solves the problem for me!

Anybody reading this while trying to set up their own sync server: just refer to the original documentation.
The problem was client-related!

It might be just a workaround for the actual problem, though. I think there is still a timeout in the PouchDB client that cuts the connection after 10 seconds. Decreasing batch_size and batches_limit keeps the _bulk_docs payload small, so requests can complete within that 10-second window.

Increasing core.db.sync.options.push.timeout alone does not fix the issue (roughly what I tried is sketched below). After reading the PouchDB docs on Options for remote databases, I suspect it might be necessary to additionally pass a larger ajax.timeout when declaring the sync target.
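For reference, this is roughly the config.cson change I tested on its own (the value is just an example):

"*":
  core:
    db:
      sync:
        options:
          push:
            # raising only the replication timeout did not help by itself
            timeout: 60000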

For now I am very happy with the ability to configure the sync options in my config.cson! Great approach to solve the problem! Thanks for the quick fix. :clap:

That’s good to hear!
And great insight, Hoopy.
From my brief investigation, PouchDB@5.4.5 uses request for making ajax requests, and I found that its timeout option is set to 10000 by default (see the excerpt from the request documentation quoted below):

So I think your assumption is right and ajax.timeout should be at least 60 sec.
I’ll change it in the next release.

Thanks!

  • timeout - integer containing the number of milliseconds to wait for a server to send response headers (and start the response body) before aborting the request. Note that if the underlying TCP connection cannot be established, the OS-wide TCP connection timeout will overrule the timeout option (the default in Linux can be anywhere from 20 to 120 seconds).
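Concretely, the idea would be to pass a larger timeout through PouchDB’s ajax options when the remote database is created, roughly like this (an illustrative sketch, not the actual app code):

// PouchDB 5.x/6.x forwards these ajax options to request
const PouchDB = require('pouchdb')

const remote = new PouchDB('https://sync.example.com:6984/my-inkdrop-notes', {
  ajax: {
    // wait up to 60 s for response headers instead of request's 10 s default
    timeout: 60000
  }
})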

I don’t understand why the client would trigger the timeout while it is still busy sending the request. Just wanted to caution you that this might, again, not be the actual cause of the sync problems. :crazy_face:

Yeah, right. So I’d like to write a troubleshooting guide that tells people to decrease batch_size and batches_limit.

v3.20.0 is out, which sets a 60-second timeout for request.
Thanks again for your report!