Posted by blueraspberryesketimine in I2P (edited by a moderator)
cumlord wrote
Reply to comment by blueraspberryesketimine in Getting started in I2P by blueraspberryesketimine
i'm not sure about SAM since this is qbit, but with I2CP, running either biglybt or snark can be glitchy on separate machines, especially with i2pd. java seems to handle random disconnects better, where i2pd might not recover, possibly due to latency. as far as i know I2CP is intended to be used on the same machine. you can split it across machines, but it runs much better with java routers from what i've found; i think i2pd is best if you keep it on the same machine.
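for reference, a rough sketch of what pointing i2pd's I2CP interface at the network looks like in i2pd.conf (by default it binds to localhost only; the address/port here are just example values, not a recommendation):

```ini
# i2pd.conf fragment (sketch; values are examples)
[i2cp]
enabled = true
# default is 127.0.0.1; binding a LAN address is what lets a client
# on another machine connect, which is the setup that can get glitchy
address = 192.168.1.10
port = 7654
```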
possible things to check: trackers are working (since there's no dht), you're in a good swarm, tunnel quantity/number of hops. basically, are peers available or is it a throughput issue
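on the tunnel quantity/hops part, these are standard I2CP options a torrent client can pass when it opens its session; the values below are just illustrative, not tuned recommendations:

```ini
# example I2CP session options (values are illustrative)
inbound.length=3        # hops per inbound tunnel (more hops = more anonymity, more latency)
outbound.length=3       # hops per outbound tunnel
inbound.quantity=4      # more parallel tunnels = more throughput headroom
outbound.quantity=4
```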
blueraspberryesketimine OP wrote
I better isolated the i2pd machine on my network just in case something goes wrong with it and I don't notice right away. While doing so, I noticed roughly half the connections to the i2p relay port are being blocked by my firewall. Strangely, the firewall is set to allow all on that port. It says it's blocking based on ingress firewall's IP filtering rules.
What rules? I didn't give it any rules. If it's unsolicited, it's blocked, but the i2p relay is requesting those connections so the firewall shouldn't be blocking them, right?
cumlord wrote
i don't know what you did as far as containerizing/vm, but i'd expect it's got something to do with that, assuming there isn't something upstream blocking it. i2p routers work best with the TCP/UDP port opened so they can accept incoming connections
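e.g. with nftables, assuming the router's assigned port is 23456 (a placeholder; check the router console for the real one), the inbound allow rules would look roughly like:

```
# nftables fragment; 23456 is a placeholder for your router's actual port
tcp dport 23456 accept
udp dport 23456 accept
```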
blueraspberryesketimine OP wrote
I decided to try running i2p+ on the same equipment as a comparison to see if it works better for me than i2pd. I have some issues with it.
First, I can't get it to use the wrapper. I'm running it in alpine linux aarch64. Looking at the i2prouter script, it doesn't seem to have any way to handle aarch64, though interestingly it does still have the older ARM architectures in the script. I suspect this is why it doesn't want to use the wrapper, even though the wrapper itself does support aarch64. I was able to work around this temporarily with runplain.sh but it's not quite ideal as I'd like to allocate more ram to i2p+. I also want to get jbigi loaded in, but I suspect the wrapper might be needed for that to work anyway.
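For reference, once the wrapper does work, the heap size is set in wrapper.config; with runplain.sh the equivalent is adding an -Xmx flag to the java invocation in the script. A sketch (1024 MB is just an example value):

```
# wrapper.config fragment (example value)
wrapper.java.maxmemory=1024

# or, without the wrapper, edit runplain.sh's java line to include e.g.:
#   java -Xmx1024m ...
```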
Anyway, my findings so far in comparing the two on this aarch64 relay:
- i2pd is way faster to bring up and tear down, though we expected that
- i2pd uses next to no ram.
- i2pd is rocket fast, but seems to eventually stop responding to http after running for a while
- i2p+ is heavy, but not as bad as I thought it would be. A diskless alpine system is running quite happily at less than 1G of ram used. Seeing as this board has 4GB on it, I still have some room to test further after I can allocate more to the JVM after fixing the wrapper.
- i2p+ is pretty! :)
- i2p+ definitely has a higher tunnel success rate than i2pd, but it also takes a lot longer to get that high. It's camping out at 83% now. I never got that high with i2pd.
- i2p+ creates significantly fewer tunnels than i2pd. i2pd would have over 6600 tunnels created at times, just giving away all the bandwidth I had to offer and coming nowhere near taxing the CPU or memory available on the host. i2p+ seems much more conservative in how it participates with the network. i2pd would build tunnels fast but also shed a lot of them, whereas i2p+ maintains connections better. I suspect I could improve that behavior in i2pd by assigning limits, but I'm still feeling this thing out, trying to find where the limits are.
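Those limits live in i2pd.conf. A sketch of what conservative settings look like (the values are examples for illustration, not recommendations):

```ini
# i2pd.conf fragment (example values)
bandwidth = 4096          # share up to 4096 KB/s with the network
share = 80                # percent of that bandwidth usable for transit

[limits]
transittunnels = 2500     # cap on participating (transit) tunnels
```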
cumlord wrote (edited )
i assumed so, which is why i brought up the i2cp thing earlier, but wasn't sure if you had it in some other container or something on the other machine that'd be blocking connections. must've been something going on with the firewall somewhere
weird about the wrapper, never tried it on alpine linux, so maybe there's a workaround or the i2prouter script could be modified. jbigi i've sometimes had to compile to get it to work right, at least with i2p+. if you don't see libjbigi.so in your i2p directory then you'd just need to compile it
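for anyone following along, compiling jbigi is roughly the steps below. treat this as a sketch from memory (the repo path and script name are assumptions); it needs gcc plus the GMP dev headers (gmp-dev on alpine):

```shell
# rough sketch of a jbigi build; assumes the i2p.i2p source tree and gmp-dev
git clone https://github.com/i2p/i2p.i2p.git
cd i2p.i2p/core/c/jbigi
./build.sh                      # builds libjbigi.so for the local CPU
cp libjbigi.so /path/to/i2p/    # drop it next to the router install
```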
the devs are around here, quickest answer to get the wrapper to work right would be to pop in to irc2p
pretty good breakdown. if you end up messing around in both you'll find they can be good for different things. i2p+ is more selective and wants to put resources toward things like service tunnels; it happens to be very good for hosting things in i2p and if you want to do other stuff on top of torrents/eepsites. i2pd is bare bones and uses little resources, usually very fast if tunnel build success is good, good for torrenting. it has its own trade-offs. i watch the memory usage on that one closely. i like i2pd a lot for certain things but i've learned you do need to be careful with it at times and set conservative limits
i2p+ will usually see build success of 70%+, i2pd should hang somewhere around 30-50%, lower with floodfill. in practice though i2pd should be running great at 30-50%, but if it drops under 10% you get problems.
blueraspberryesketimine OP wrote
I wonder why i2pd has a lower rate than i2p+. Does it just have a different way of evaluating that metric?
I'm continuing to experiment with this and torrents. I migrated it back to my server and tried taking it out of the container since I was running the container rootless when I first tried it there. rootless podman containers supposedly have issues with UDP connections. The torrenting speed in qbittorrent and snark are a little better, but still topping out at only about 200k so there's still more room for tweaking here.
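Re: rootless podman and UDP, the workaround usually suggested is switching the port handler so slirp4netns forwards UDP itself instead of podman's default rootlessport forwarder. A sketch (the image name and port are placeholders, not from this thread):

```shell
# rootless podman's default port forwarder can mangle UDP source addresses;
# slirp4netns's own port handler avoids that. image/port are placeholders.
podman run --network slirp4netns:port_handler=slirp4netns \
  -p 23456:23456/tcp -p 23456:23456/udp \
  purplei2p/i2pd
```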
Right now, things seem more stable with I2P+ than I2PD, but I'm blaming that squarely on my own ignorance in how to properly tune this setup.
I'm really excited to see how that emissary project grows too, but I'm not sure I trust it just yet. I'm going to wait for others who actually know what they are doing to vet the project as well as SSU2 support to finish before I give it a try.
cumlord wrote
i'm not completely sure, but i'd hazard a guess it could have something to do with i2p+ being more selective in its peer profiling compared to i2pd, and that java routers use different bids for NTCP2 and SSU2
emissary is super exciting, just kinda showed up out of nowhere
blueraspberryesketimine OP wrote
It's actually running on a separate physical device. I wanted to put it in the media server itself, but my container network skills aren't great and that server gets taken down from time to time for me to mess with. Uptime matters here, so it made sense to keep i2p separated from the server.