Ultracopier remains faster than Windows when copying to my NAS, and Windows still has copy bugs in some cases.
And that's with the engine that supports pause/speed limiting; without these features Ultracopier can go even faster!
I'm trying to earn a living as best I can (I currently make <300€/month working at least 80h/week, while devs of my level make more than 5000€/month). Between all my responsibilities I have almost no time left to code.
To try to get out of this situation, I have modernized the whole domain:
Ended small projects that nobody knows about
Closed dead projects (outdated or not exploited in time)
Separated projects into their own VMs, for easy shutdown, maintenance, and security
Closed non-vital parts such as forums, …
Made some wikis static
Apart from that, I am building a production pipeline, in parallel with the datacenter activity of Confiared, which is already really top-notch technologically. Not to mention my personal obligations. This year I will try to have more time for code.
Hi, to improve the service at Confiared I'm rewriting the CDN software.
We were using Nginx + Nginx FastCGI cache + PHP (just to proxy the reply). This solution lacked fine cache-control tuning and had some bugs due to the Nginx cache.
So I have rewritten the CDN as a standalone FastCGI server, where the cache is controlled directly by the server. If the same URL is already being downloaded, the content is sent from the partially downloaded data. I chose a single-threaded design to get great performance without thread-coherency code: it is simpler to develop, and more efficient when the code itself is very fast, because in that case most of the time would otherwise be consumed by thread management and data migration from one CPU to another.
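To make the coalescing idea concrete, here is a minimal sketch (hypothetical names, not the actual Confiared code): a client asking for a URL already being fetched gets the bytes buffered so far, then every new chunk as it arrives, and only one upstream request is ever made. Being single-threaded, no locks are needed anywhere.

```cpp
#include <unistd.h>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// One in-flight origin download, shared by every client asking the same URL.
struct InFlightDownload {
    std::string bufferedBody;    // bytes received from the origin so far
    std::vector<int> clientFds;  // clients subscribed to this URL
};

static std::map<std::string, InFlightDownload> inFlight;  // keyed by URL

static void sendToClient(int fd, const char* data, size_t size) {
    ::write(fd, data, size);  // real code would handle partial writes
}

static void startOriginFetch(const std::string& url) {
    // connect to the origin and send ONE upstream request (omitted)
}

// Called when a client requests a URL.
void onClientRequest(int clientFd, const std::string& url) {
    auto it = inFlight.find(url);
    if (it != inFlight.end()) {
        // Same URL already downloading: replay the partial content now,
        // then subscribe for future chunks. No second origin fetch.
        sendToClient(clientFd, it->second.bufferedBody.data(),
                     it->second.bufferedBody.size());
        it->second.clientFds.push_back(clientFd);
        return;
    }
    inFlight[url].clientFds.push_back(clientFd);
    startOriginFetch(url);
}

// Called when a chunk arrives from the origin: fan it out to every waiter.
void onOriginData(const std::string& url, const char* data, size_t size) {
    InFlightDownload& d = inFlight[url];
    d.bufferedBody.append(data, size);
    for (int fd : d.clientFds)
        sendToClient(fd, data, size);
}
```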
The code is specific, not flexible or generalist. I parse the protocols (DNS, FastCGI, …) on the fly. That greatly improves performance and reduces memory usage. An internal page is served 3x faster than a simple « Hello world » in PHP 7.4.
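As an illustration of on-the-fly parsing, here is a sketch of decoding a FastCGI record header straight from the receive buffer, following the public FastCGI 1.0 spec layout (my real parser handles more than this):

```cpp
#include <cstdint>
#include <cstddef>

// FastCGI record header per the FastCGI 1.0 spec: 8 bytes,
// ids and lengths encoded big-endian.
struct FcgiRecord {
    uint8_t  type;
    uint16_t requestId;
    uint16_t contentLength;
    uint8_t  paddingLength;
};

// Parse directly from the socket buffer: no copy, no allocation.
// Returns false if fewer than 8 bytes are available yet.
bool parseFcgiHeader(const uint8_t* buf, size_t len, FcgiRecord& out) {
    if (len < 8 || buf[0] != 1)  // version must be FCGI_VERSION_1
        return false;
    out.type          = buf[1];
    out.requestId     = static_cast<uint16_t>(buf[2] << 8 | buf[3]);
    out.contentLength = static_cast<uint16_t>(buf[4] << 8 | buf[5]);
    out.paddingLength = buf[6];  // buf[7] is reserved
    return true;
}
```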
The future improvements are: a better cache, caching some things where needed (DNS, …), using io_uring to improve file access and be 3x faster than Nginx on static files, and profiling to optimize the code. (And maybe writing my own HTTP server.)
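For the io_uring plan, a minimal file read with liburing looks like this (a sketch of the API, not my server code; a real server keeps the ring alive and batches many reads per syscall):

```cpp
#include <liburing.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    io_uring ring;
    if (io_uring_queue_init(8, &ring, 0) < 0)  // ring with 8 entries
        return 1;

    int fd = open("/etc/hostname", O_RDONLY);  // any static file
    char buf[4096];

    // Queue one read, submit, then wait for its completion.
    io_uring_sqe* sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf), 0);  // offset 0
    io_uring_submit(&ring);

    io_uring_cqe* cqe;
    io_uring_wait_cqe(&ring, &cqe);
    if (cqe->res > 0)
        fwrite(buf, 1, cqe->res, stdout);  // cqe->res = bytes read
    io_uring_cqe_seen(&ring, cqe);

    close(fd);
    io_uring_queue_exit(&ring);
    return 0;
}
```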
What is the problem with other architectures, for Linux and for developers?
When you have code abstraction (as in Python, Java, C#), porting is transparent, so supporting another architecture is easy. But a lot of software in the system is in C/C++, with old code, which means: each time you have system-specific or architecture-specific code, you need to adapt it to the new system/architecture. For 32-bit, in C/C++, some dirty code casts pointers to integers, which creates problems at the 32/64-bit boundary; add to that the architecture getting less popular every year, so less maintained by the owner of each project, …
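For example, here is the kind of dirty cast I mean, and the portable fix:

```cpp
#include <cstdint>

// Dirty legacy pattern (often written as just "(int)p" in old C code):
// fine on 32-bit where int and pointers are both 4 bytes, but on 64-bit
// the pointer is 8 bytes, so the round trip silently truncates the address.
void* badRoundTrip(void* p) {
    int handle = static_cast<int>(reinterpret_cast<long long>(p));  // truncates on LP64
    return reinterpret_cast<void*>(static_cast<long long>(handle)); // may be a different address
}

// Portable fix: uintptr_t is guaranteed large enough for any object pointer.
void* goodRoundTrip(void* p) {
    uintptr_t handle = reinterpret_cast<uintptr_t>(p);
    return reinterpret_cast<void*>(handle);
}
```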
Retrofitting 64-bit support onto such code is harder than doing generic architecture support with clean code from the start.
About obsolescence: ARM has been dropping 32-bit over the last 10 years. I have a lot of hardware running in 32-bit. It's fully functional and very good for its assigned task.
In this context: everybody tries hard just to save a few % of resources for the planet. We have no control over the big companies, so forget about getting the latest Android on a 5-year-old phone. But generic architecture support is easy (yes, not optimized, so what?) and avoids:
Manufacturing other hardware (less waste, fewer wars over rare resources)
Manufacturing hardware with compatibility layers, which means more silicon and less performance
Shipping (resources and problems)
People and companies spending a lot of time buying other hardware, swapping it, configuring it, and fixing things for the new hardware
Performance-wise, 32-bit on x86 is slower, but for software with heavy pointer usage it uses less memory. And performance is not the target for everyone.
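A small illustration of the memory difference for pointer-heavy code:

```cpp
#include <cstdio>

// A pointer-heavy node: on ILP32 (32-bit) this is 8 bytes,
// on LP64 (64-bit) it is 16 bytes (8-byte pointer + 4-byte int + 4 padding).
struct Node {
    Node* next;
    int   value;
};

int main() {
    std::printf("sizeof(void*) = %zu, sizeof(Node) = %zu\n",
                sizeof(void*), sizeof(Node));
    // A list of 10 million nodes: ~80MB on 32-bit vs ~160MB on 64-bit.
    return 0;
}
```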
I have finished implementing a cache in HPS to reduce memory usage at server startup, and to reduce startup time for both server and client (useful on phones or wherever the CPU is slow).
EDIT: How the cache is kept in sync with the datapack:
Scan the datapack at startup and create a checksum; if the checksum matches the cache, load the cache. Problem: it's slow, mostly on slow disks and file systems, because it needs to access every inode, which can greatly slow down the cache.
Never check whether the datapack changed; regenerate the cache manually. More performance, but not suitable for everyone: it needs to be integrated wherever you update your datapack.
Why?
A single file is loaded, not multiple, so less pressure on the file system
No file-format decoding, it's only deserialization
No decoding values from string to int
No endianness conversion
Only used information is kept, no useless data is loaded (see the sketch below)
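Here is a minimal sketch of what such a cache load can look like (hypothetical layout, magic value, and names, not the actual HPS format): one open(), sequential reads, raw fixed-size records deserialized straight into memory, and a stored datapack checksum to decide whether the cache is still valid.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical flat cache layout: written and read on the same machine,
// so values are stored raw in native endianness, never as text.
struct CacheHeader {
    uint32_t magic;            // identifies the file format
    uint32_t datapackChecksum; // checksum of the datapack at cache-build time
    uint32_t itemCount;
};
struct Item {                  // fixed-size record, loadable as-is
    uint32_t id;
    uint32_t price;
};

// Load the whole cache from ONE file: no per-file inode access, no parsing.
bool loadCache(const char* path, uint32_t expectedChecksum,
               std::vector<Item>& items) {
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;
    CacheHeader h;
    if (std::fread(&h, sizeof(h), 1, f) != 1
        || h.magic != 0x48505343u  // hypothetical "HPSC" magic
        || h.datapackChecksum != expectedChecksum) {  // datapack changed: rebuild
        std::fclose(f);
        return false;
    }
    items.resize(h.itemCount);
    bool ok = std::fread(items.data(), sizeof(Item), h.itemCount, f) == h.itemCount;
    std::fclose(f);
    return ok;
}
```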
Hi, while developing CatchChallenger I had a problem with QWidget: I had low performance on Android, 9 FPS.
My mix of QGraphicsView + QWidget is not supported when I enable OpenGL on Android, and on other platforms it is buggy.
So I tested a lot of things; QGraphicsView + OpenGL + native widgets for a 2D game gives correct performance: 60 FPS on all platforms, with <6% CPU on Android (on a Cortex-A53). So Qt via QGraphicsView seems 100% ready for games in 2019.
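The key step is giving the QGraphicsView an OpenGL viewport. A minimal Qt 5 example (the real CatchChallenger setup has more tuning than this):

```cpp
#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QOpenGLWidget>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);

    QGraphicsScene scene;
    scene.addText("game goes here");  // stand-in for the 2D game items

    QGraphicsView view(&scene);
    view.setViewport(new QOpenGLWidget());  // render the scene through OpenGL
    // With an OpenGL viewport, partial updates are not worth it:
    view.setViewportUpdateMode(QGraphicsView::FullViewportUpdate);
    view.show();

    return app.exec();
}
```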
For WebAssembly and Android there is no window manager, so I need to remake everything in a single window.
How did I go from 8 hours down to 5 minutes to compile my CatchChallenger cluster nodes?
First, make sure only minimal headers are included in your source files, as in the sketch below.
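For example (illustrative, not taken from the CatchChallenger sources), a forward declaration in a header avoids pulling a heavy header into every file that includes it:

```cpp
// widget.h - BEFORE: pulls a heavy header into every includer of widget.h
// #include <QNetworkAccessManager>

// AFTER: a forward declaration is enough when the header only uses a pointer.
class QNetworkAccessManager;

class Widget {
public:
    void setNetworkManager(QNetworkAccessManager* manager);
private:
    QNetworkAccessManager* m_manager = nullptr;  // pointer: full type not needed here
};

// widget.cpp includes <QNetworkAccessManager> itself, so only ONE
// translation unit pays the parsing cost instead of every includer.
```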
I use the same OS and the same architecture on all my nodes, so I compile on one node per device type, not on all nodes (lowering the concurrency), and copy the binary. Much lower memory pressure, and no swap usage.
With this I went from -j1 compilation to -j9, which is much more powerful too. And in the end my compile time is very low.
After C10K (10,000 concurrent connections) and C10M (10,000,000 concurrent connections), C10B is now near (10 billion concurrent connections, 10Tbps, 10 billion packets per second, 1 billion connections/second). In my case: 1 billion concurrent connections, 1Tbps, 1 billion packets per second, 1 million connections/second.
I have tested it with CatchChallenger, on a 32-core Threadripper 2990WX, 256GB of DDR4 and a Radeon RX Vega 64.
This experiment can be used for high-speed network packet processing on 100G+ Ethernet or InfiniBand devices. I did it as R&D for my company's router, which has 1Tbps routing capability (at 64B packet size), certified with a benchmark.
I do stateless filtering on the IPv6 input; each processing unit afterwards does the stateful work, but with special dispatching of memory accesses to reduce cache misses. Meaning: IP/TCP processing on the GPU, CatchChallenger processing on the CPU.
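To show what I mean by stateless: a sketch of a filter whose decisions depend only on the packet bytes, never on connection state, so it can run massively parallel (one packet per GPU thread) before the stateful CPU stage. Illustrative only; the real pipeline is not published.

```cpp
#include <cstdint>
#include <cstddef>

// Stateless sanity checks on a raw IPv6 packet.
bool statelessIPv6Filter(const uint8_t* pkt, size_t len) {
    if (len < 40)                      // fixed IPv6 header is 40 bytes
        return false;
    if ((pkt[0] >> 4) != 6)            // version nibble must be 6
        return false;
    uint16_t payloadLen = static_cast<uint16_t>(pkt[4] << 8 | pkt[5]);
    if (40u + payloadLen > len)        // truncated packet
        return false;
    uint8_t nextHeader = pkt[6];
    return nextHeader == 6 || nextHeader == 17;  // keep only TCP/UDP here
}
```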
Difficulty: being very limited in memory, I used a specialized swap technique with a barrier to delay some classes of traffic (map movement: 95% of moves, plus an unpublished protocol with another movement vector to avoid memory accesses, parsing only the last position when the previous data is needed), with bulk processing of all this.
Result:
59ms average reply time
342ms reply time at the 95th percentile
1.2Tbps of network bandwidth burned
I have already finished HTTP/3 support for Confiared, but it's not pushed to production because I need to check it more.
Next week I will work on it, for South America (e.g. Bolivia); one of the interesting parts is that HTTP/3 is less sensitive to RTT. Meaning: for HTTPS, it waits much less time before starting to download the web page.
It will first be enabled on the IPv4 reverse proxy for our VPS and hosting, to get it onto your servers.