Gator007 wrote:Has the 007 branch been released?
Yes
https://www.chess.com/forum/view/game-analysis/freeware-against-commercial-chess-software-arena-3-5-1-vs-chessbase-fritz-17?page=1
AlexChess wrote:ProteusSF JBE 007 +3 vs Crystal 220504
Android, 8 CPUs, 400 kn/s - 100 games - ProteusBook Test
ALL PGN GAMES: https://pixeldrain.com/u/fzogeDcz
l3rilliance wrote:Proteus is not made from Stockfish 14.1 as the author says; it is made from ShashChess. Just launch the Proteus engine once and on the first screen you can see the authors Kiniama and Manzo, who work on ShashChess. Also, compare its source code files with ShashChess and you can see it for yourselves.
"Proteus Chess SF - developing my Stockfish 14.1 derivative." - that is not what you are doing. Instead:
"Proteus Chess SF - developing my ShashChess 22 derivative." - this is what you are actually doing.
You can have a look at the screenshot and see the truth for yourself: https://we.tl/t-Vy3hPGKFyu
deeds wrote:In the first positions of a game, even if Stockfish searches to depth 80 (D80) or deeper, nothing proves the game will end within 80 plies, and the opponent may play different moves from those expected from Stockfish's point of view. So, in sum, without a learning feature the engine has no experience of the whole game, no experience of the usual endgames, no experience of the opponent's playing style, no true WDL ratio, etc. Stockfish plays stronger at d80 than at d30, but if the game ends in 140 plies, its evaluation is still very limited. A trained engine doesn't need d40 or deeper to outplay SF.
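As a small illustration of the depth point, here is a minimal sketch using the python-chess library; the engine path and depths are placeholder assumptions, and the W/D/L it prints is converted from the static score by a model rather than learned from games, which is exactly the "no true WDL ratio" objection above.

```python
import chess
import chess.engine

ENGINE_PATH = "./stockfish"  # hypothetical path to any UCI engine

board = chess.Board()  # starting position; substitute an early middlegame FEN
with chess.engine.SimpleEngine.popen_uci(ENGINE_PATH) as engine:
    for depth in (20, 30):
        info = engine.analyse(board, chess.engine.Limit(depth=depth))
        score = info["score"].pov(chess.WHITE)
        # The W/D/L here is derived from the score by a conversion model,
        # not from any accumulated game experience.
        wdl = info["score"].wdl().pov(chess.WHITE)
        print(f"depth {depth}: score {score}, expected points {wdl.expectation():.2f}")
```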
l3rilliance wrote:My point is that there is a lot of difference between Stockfish 14.1 and ShashChess. So just tell us what it is really based on.
deeds wrote:A trained engine:
- 500 games per opening where the engine showed weaknesses. Use a few opponents that have a learning feature.
- lost games analyzed at 10 min/ply when the average game duration is about 10 min, 1 hr/ply when it is 1 hr/game, etc.
- avoid q-learning during learning when you don't use q-learning at the tourney, otherwise the experience data are worse than the default engine. Q-learning needs thousands of games per opening; it learns very, very slowly. (A rough automation sketch follows below.)
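The following is a minimal sketch of the kind of sparring loop described in the list above, using the python-chess library. The engine paths, opening FEN, game count and per-move time are placeholder assumptions, not the exact workflow or tools deeds uses; whether and how the learning engine actually persists its experience data is specific to that engine.

```python
import chess
import chess.engine

LEARNER_PATH = "./proteus"    # hypothetical path to a learning-enabled UCI engine
OPPONENT_PATH = "./opponent"  # hypothetical path to a sparring partner with a learning feature
OPENING_FEN = chess.STARTING_FEN      # replace with the FEN of the weak opening line
GAMES = 500                           # "500 games per opening"
LIMIT = chess.engine.Limit(time=1.0)  # per-move time; scale toward the target game length

with chess.engine.SimpleEngine.popen_uci(LEARNER_PATH) as learner, \
     chess.engine.SimpleEngine.popen_uci(OPPONENT_PATH) as opponent:
    for g in range(GAMES):
        board = chess.Board(OPENING_FEN)
        # Alternate colours so the weak line is trained from both sides.
        white, black = (learner, opponent) if g % 2 == 0 else (opponent, learner)
        while not board.is_game_over(claim_draw=True):
            engine = white if board.turn == chess.WHITE else black
            result = engine.play(board, LIMIT)
            board.push(result.move)
        print(f"game {g + 1}: {board.result(claim_draw=True)}")
```

In practice most people drive such matches with a tournament manager such as cutechess-cli rather than a hand-rolled loop; the point here is only to show the shape of "many games from one opening against a learning-capable opponent".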
Homayoun wrote:Hi Alex. I use the Windows binary (bmi2). Proteus plays very well and has a very good evaluation (due to its underlying BrainLearn evaluation). It is now a very nice engine for analysis and practice. Are you sure you want to change its BrainLearn base? You know better than I do that there are many other Stockfish derivatives which are very strong in engine competitions but, because they lack a good, human-like evaluation, are not suitable for human practice and analysis. Fortunately, Proteus 007 is now suitable and appropriate for both uses. I don't want to interfere more, but I think your decision needs more checking and review. Maybe you can solve the Android version issues, so it would not be necessary to change the base of the engine, which would change its good playing style and evaluation.
Best regards
AlexChess wrote:Thank you deeds. Sarona was right, your method is really good. I'm using your 250 MB Eman.exp taken from @Anton01's FB page for the next Proteus version (June release) and it is doing really well!!!
deeds wrote:AlexChess wrote:Thank you deeds. Sarona was right, your method is really good. I'm using your 250 MB Eman.exp taken from @Anton01's FB page for the next Proteus version (June release) and it is doing really well!!!
So the next Proteus will use an experience file calibrated only for Eman's weaknesses at 3m+2s on 40t (mostly B openings). This EXP file contains Eman's evaluations, which are different from BrainLearn's and have nothing at all to do with Q-learning's. Furthermore, the experience data aren't stored/loaded in the same format/way in an experience.bin file as in an eman.exp file.
Good luck finding the same thresholds/settings/logic as those used by Khalid in Eman's learning code; even MZ didn't get the same in HypnoS. By mixing experience data like this, you replace Proteus's evaluation with that of another hyper-trained engine, so you have finally admitted to us that Proteus's evaluation is way off the mark.
Homayoun wrote:I only know that Proteus 007 now has one of the most human-like evaluations, like the engines SugaR ICCF, BrainLearn, ShashChess and Crystal. These engines never overvalue any positions resulting from different openings and also play different openings very well, even without using an accessory bin book.