Closed
Description
What is the issue?
In the first few days, Tailscale was working well; then it suddenly stopped working and I could never start it again. I have tried starting it from the web UI, with synosystemctl, and with systemctl; none of them works.
stdout log:
2022-09-30T16:41:33+08:00 Starting tailscale with: /volume1/@appstore/Tailscale/bin/tailscaled --state=/volume1/@appdata/Tailscale/tailscaled.state --socket=/volume1/@appdata/Tailscale/tailscaled.sock --port=41641
SIGILL: illegal instruction
PC=0x485a80 m=0 sigcode=1
instruction bytes: 0x0 0x6 0x38 0xd5 0xe0 0x7 0x0 0xf9 0xc0 0x3 0x5f 0xd6 0x0 0x0 0x0 0x0
goroutine 1 [running, locked to thread]:
golang.org/x/sys/cpu.getisar0()
golang.org/x/sys@v0.0.0-20220715151400-c0bba94af5f8/cpu/cpu_arm64.s:14 fp=0x4000065510 sp=0x4000065510 pc=0x485a80
golang.org/x/sys/cpu.readARM64Registers()
golang.org/x/sys@v0.0.0-20220715151400-c0bba94af5f8/cpu/cpu_arm64.go:65 +0x2c fp=0x4000065550 sp=0x4000065510 pc=0x48530c
golang.org/x/sys/cpu.doinit()
golang.org/x/sys@v0.0.0-20220715151400-c0bba94af5f8/cpu/cpu_linux_arm64.go:38 +0x24 fp=0x4000065560 sp=0x4000065550 pc=0x4855f4
golang.org/x/sys/cpu.archInit(...)
golang.org/x/sys@v0.0.0-20220715151400-c0bba94af5f8/cpu/cpu_arm64.go:45
golang.org/x/sys/cpu.init.0()
golang.org/x/sys@v0.0.0-20220715151400-c0bba94af5f8/cpu/cpu.go:199 +0x20 fp=0x4000065570 sp=0x4000065560 pc=0x484990
runtime.doInit(0xf17560)
runtime/proc.go:6321 +0x128 fp=0x40000656b0 sp=0x4000065570 pc=0x56c38
runtime.doInit(0xf19700)
runtime/proc.go:6298 +0x68 fp=0x40000657f0 sp=0x40000656b0 pc=0x56b78
runtime.doInit(0xf1dde0)
runtime/proc.go:6298 +0x68 fp=0x4000065930 sp=0x40000657f0 pc=0x56b78
runtime.doInit(0xf243a0)
runtime/proc.go:6298 +0x68 fp=0x4000065a70 sp=0x4000065930 pc=0x56b78
runtime.doInit(0xf24520)
runtime/proc.go:6298 +0x68 fp=0x4000065bb0 sp=0x4000065a70 pc=0x56b78
runtime.doInit(0xf295e0)
runtime/proc.go:6298 +0x68 fp=0x4000065cf0 sp=0x4000065bb0 pc=0x56b78
runtime.doInit(0xf251c0)
runtime/proc.go:6298 +0x68 fp=0x4000065e30 sp=0x4000065cf0 pc=0x56b78
runtime.doInit(0xf26760)
runtime/proc.go:6298 +0x68 fp=0x4000065f70 sp=0x4000065e30 pc=0x56b78
runtime.main()
runtime/proc.go:233 +0x1f8 fp=0x4000065fd0 sp=0x4000065f70 pc=0x48bb8
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x4000065fd0 sp=0x4000065fd0 pc=0x7a824
goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:363 +0xe4 fp=0x4000054fa0 sp=0x4000054f80 pc=0x48fe4
runtime.goparkunlock(...)
runtime/proc.go:369
runtime.forcegchelper()
runtime/proc.go:302 +0xb4 fp=0x4000054fd0 sp=0x4000054fa0 pc=0x48e74
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x4000054fd0 sp=0x4000054fd0 pc=0x7a824
created by runtime.init.6
runtime/proc.go:290 +0x24
goroutine 3 [GC sweep wait]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
runtime/proc.go:363 +0xe4 fp=0x4000055770 sp=0x4000055750 pc=0x48fe4
runtime.goparkunlock(...)
runtime/proc.go:369
runtime.bgsweep(0x0?)
runtime/mgcsweep.go:278 +0xa4 fp=0x40000557b0 sp=0x4000055770 pc=0x33ed4
runtime.gcenable.func1()
runtime/mgc.go:178 +0x28 fp=0x40000557d0 sp=0x40000557b0 pc=0x28338
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x40000557d0 sp=0x40000557d0 pc=0x7a824
created by runtime.gcenable
runtime/mgc.go:178 +0x70
goroutine 4 [GC scavenge wait]:
runtime.gopark(0x4000074000?, 0xa6ec78?, 0x1?, 0x0?, 0x0?)
runtime/proc.go:363 +0xe4 fp=0x4000055f50 sp=0x4000055f30 pc=0x48fe4
runtime.goparkunlock(...)
runtime/proc.go:369
runtime.(*scavengerState).park(0xf753a0)
runtime/mgcscavenge.go:389 +0x5c fp=0x4000055f80 sp=0x4000055f50 pc=0x31ecc
runtime.bgscavenge(0x0?)
runtime/mgcscavenge.go:617 +0x44 fp=0x4000055fb0 sp=0x4000055f80 pc=0x32434
runtime.gcenable.func2()
runtime/mgc.go:179 +0x28 fp=0x4000055fd0 sp=0x4000055fb0 pc=0x282d8
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x4000055fd0 sp=0x4000055fd0 pc=0x7a824
created by runtime.gcenable
runtime/mgc.go:179 +0xb4
goroutine 5 [finalizer wait]:
runtime.gopark(0x600000000045a0?, 0x0?, 0x8?, 0x61?, 0x2000000000?)
runtime/proc.go:363 +0xe4 fp=0x4000054580 sp=0x4000054560 pc=0x48fe4
runtime.goparkunlock(...)
runtime/proc.go:369
runtime.runfinq()
runtime/mfinal.go:180 +0x128 fp=0x40000547d0 sp=0x4000054580 pc=0x27558
runtime.goexit()
runtime/asm_arm64.s:1172 +0x4 fp=0x40000547d0 sp=0x40000547d0 pc=0x7a824
created by runtime.createfing
runtime/mfinal.go:157 +0x94
r0 0x1
r1 0x400010bef0
r2 0x0
r3 0xa73740
r4 0x400010bef0
r5 0x0
r6 0x1
r7 0x0
r8 0x0
r9 0x30
r10 0x400010bef0
r11 0x30
r12 0x0
r13 0x90529a
r14 0x400010bf1f
r15 0x1000
r16 0x40000643a0
r17 0xa
r18 0x0
r19 0x110
r20 0x4000065430
r21 0xf75fc0
r22 0x4000004000
r23 0x2788d060f2
r24 0x20a3edd96057445c
r25 0x131cee05f4fe6b9a
r26 0xf17598
r27 0xfa6a4e
r28 0x40000021a0
r29 0x4000065508
lr 0x48530c
sp 0x4000065510
pc 0x485a80
fault 0x0
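For context (my own reading of the trace, not something stated in the log): the instruction bytes 0x0 0x6 0x38 0xd5 are the little-endian encoding of the AArch64 instruction 0xd5380600, i.e. mrs x0, ID_AA64ISAR0_EL1, which golang.org/x/sys/cpu executes at package init time to probe CPU features. Because the fault happens during init, before main runs, a program that merely imports that package should crash the same way in the affected environment. A minimal repro sketch under that assumption (the program itself is hypothetical; cpu.ARM64.HasAES is a real field of the package):

package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	// The SIGILL above occurs in x/sys/cpu's init(), so simply importing
	// the package is enough to trigger it; this print only runs if the
	// init-time feature probe survived.
	fmt.Println("arm64 AES instructions:", cpu.ARM64.HasAES)
}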
Executing the command manually, everything works perfectly (see the note after this log):
ash-4.4# /volume1/@appstore/Tailscale/bin/tailscaled --state=/volume1/@appdata/Tailscale/tailscaled.state --socket=/volume1/@appdata/Tailscale/tailscaled.sock --port=41641
logtail started
Program starting: v1.30.2-t118545749, Go 1.19.1-tsb13188dd36: []string{"/volume1/@appstore/Tailscale/bin/tailscaled", "--state=/volume1/@appdata/Tailscale/tailscaled.state", "--socket=/volume1/@appdata/Tailscale/tailscaled.sock", "--port=41641"}
LogID: 【masked data】
logpolicy: using system state directory "/var/lib/tailscale"
logpolicy.ConfigFromFile /var/lib/tailscale/tailscaled.log.conf: open /var/lib/tailscale/tailscaled.log.conf: no such file or directory
logpolicy.Config.Validate for /var/lib/tailscale/tailscaled.log.conf: config is nil
wgengine.NewUserspaceEngine(tun "tailscale0") ...
setting link attributes: setsockopt: protocol not available
router: v6nat = false
dns: [rc=unknown ret=direct]
dns: using *dns.directManager
link state: interfaces.State{defaultRoute=eth0 ifs={docker0:[172.17.0.1/16] eth0:[【masked data】/24]} v4=true v6=false}
magicsock: disco key = d:【masked data】
Creating WireGuard device...
Bringing WireGuard device up...
external route: up
Bringing router up...
Clearing router settings...
Starting link monitor...
Engine created.
synology Taildrop support: shared folder "Taildrop" not found
Start
using backend prefs for "_daemon": Prefs{ra=false dns=false want=true routes=[] nf=off url="【masked data】" Persist{【masked data】"}}
Backend: logs: be:【masked data】 fe:
control: client.Login(false, 0)
health("overall"): error: not in map poll
control: doLogin(regen=false, hasUrl=false)
control: control server key from 【masked data】: ts2021=, legacy=[o85xS]
control: RegisterReq: onode= node=[O2kjW] fup=false
control: RegisterReq: got response; nodeKeyExpired=false, machineAuthorized=true; authURL=false
active login:【masked data】
Switching ipn state NoState -> Starting (WantRunning=true, nm=true)
magicsock: SetPrivateKey called (init)
wgengine: Reconfig: configuring userspace WireGuard config (with 0/15 peers)
wgengine: Reconfig: configuring router
wgengine: Reconfig: configuring DNS
dns: Set: {DefaultResolvers:[] Routes:{} SearchDomains:[] Hosts:16}
dns: Resolvercfg: {Routes:{} Hosts:16 LocalDomains:[]}
dns: OScfg: {Hosts:[] Nameservers:[] SearchDomains:[] MatchDomains:[]}
monitor: RTM_NEWROUTE: src=【masked data】/0, dst=【masked data】/32, gw=, outif=13, table=255
peerapi: serving on http://[【masked data】::5]:44445
peerapi: serving on http://【masked data】:42545
portmapper: UPnP meta changed: {Location:http://【masked data】:5000/rootDesc.xml Server:OpenWRT/OpenWrt UPnP/1.1 MiniUPnPd/2.0 USN:uuid:【masked data】::urn:schemas-upnp-org:device:InternetGatewayDevice:1}
magicsock: home is now derp-10 (sea)
magicsock: adding connection to derp-10 for home-keep-alive
control: NetInfo: NetInfo{varies= hairpin= ipv6=false udp=false icmpv4=false derp=#10 portmap=active-UMC link=""}
magicsock: 1 active derp conns: derp-10=cr0s,wr0s
derphttp.Client.Connect: connecting to derp-10 (sea)
magicsock: endpoints changed: 【masked data】129:41642 (portmap), 【masked data】:41641 (local), 172.17.0.1:41641 (local), 【masked data】:41641 (local)
Switching ipn state Starting -> Running (WantRunning=true, nm=true)
magicsock: derp-10 connected; connGen=1
health("overall"): ok
Steps to reproduce
No response
Are there any recent changes that introduced the issue?
No response
OS
Synology
OS version
DSM7
Tailscale version
v1.30.2-t118545749
Bug report
No response
stephenrlouie
Metadata
Labels: L1 Very few (Likelihood), OS-synology (Synology NAS devices), P2 Aggravating (Priority level), T8 Crash (Issue type), bug