history and cultures... back to the bibles.

Setup for GRPO: sliding window = null (the usual full 128 context window).
Current iteration tested!
Working well: in fact, science and reasoning are very good!
The problem that needs to be solved:
Sometimes a quick answer is required, not reasoning! (But I found that reasoning also happens when the model has either multiple possible outputs or no pretrained output available: it generates an answer using the methodologies it was trained on, i.e. chain-of-thought methods for solving complex maths. So it may not have seen the actual sum before, but it can handle the individual components, so it will reason its way to the answer. This can be very long, with a very long wait! It may seem not to be solving, but it will give a final answer; just let it continue its output.) Perhaps longer conversations, i.e. multi-turn conversations, would help, enabling discussion instead of self-reasoning, as it seems to have lost its discursive capabilities! I also found that this shows up the gaps in its knowledge!
I find that once it has given an answer it does not wish to expand on it or seek an alternative; it seems to stick with its original answer. (How to change this? Usually with more datasets using other methods or summarised answers, not short ground truths, since some questions can be answered in multiple ways, so GRPO training must be followed up with training on new simple Q&A or Alpaca-styled data, as sketched below.)
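A minimal sketch of the follow-up data shape assumed here: plain Alpaca-style records with short, summarised answers rather than long reasoning traces. The field names follow the common Alpaca convention; the content is a made-up example, not from the actual dataset.

```python
# Illustrative Alpaca-style record for the follow-up SFT pass described above.
# Field names follow the common Alpaca convention; the content is an example only.
followup_example = {
    "instruction": "Give a short summary of what GRPO training changes in a model.",
    "input": "",
    "output": "GRPO is a reinforcement-learning step that rewards preferred "
              "completions, so the model shifts toward the answer styles the "
              "reward favoured.",
}
```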
Sliding window has been disabled in the Unsloth setup, as the model will not GRPO-train otherwise:
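A minimal sketch of that setup, assuming an Unsloth load ahead of a GRPO run: the checkpoint name comes from this card, while the sequence length and 4-bit loading are assumptions rather than the exact values used.

```python
# A minimal sketch, not the exact training script: load the model with Unsloth
# and switch off sliding-window attention before handing it to a GRPO trainer.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/_Spydaz_Web_ONTOLOGY_OFFICER_",  # checkpoint named on this card
    max_seq_length=131072,  # assuming the "128" above means the full 128k-token window
    load_in_4bit=True,      # assumed quantised load; adjust as needed
)

# Disable sliding-window attention so GRPO trains over the full context window.
model.config.sliding_window = None
```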
Also, a new dataset has been combined with the old (mixed reasoning, chat and ground truth) plus some newer science and ground truth, and perhaps a few multiple-choice items (1%), as I find the most recent dataset tends to become the main output style, but the model still learns from the earlier training runs:
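A minimal sketch of that mix using the Hugging Face datasets library; the dataset names are placeholders, and the exact proportions beyond the roughly 1% multiple choice are assumptions.

```python
# Interleave the older mixed reasoning/chat data, the newer science ground truth,
# and a small (~1%) slice of multiple choice. Names and ratios are placeholders.
from datasets import load_dataset, interleave_datasets

reasoning_chat = load_dataset("your-org/mixed-reasoning-chat", split="train")  # placeholder
science_gt = load_dataset("your-org/science-ground-truth", split="train")      # placeholder
multi_choice = load_dataset("your-org/multiple-choice", split="train")         # placeholder

mixed_train = interleave_datasets(
    [reasoning_chat, science_gt, multi_choice],
    probabilities=[0.59, 0.40, 0.01],  # keep multiple choice to roughly 1%
    seed=42,
)
```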
This model is a stage on the way to the Ontology Officer! Here it will be heavily trained on some medical material as well as DNA and genus information: I find it important to understand the history and genome of animals and insects, as well as other language and NLP tasks! The previous Science Officer was mainly maths and technical knowledge, plus programming; its predecessor was a business model (performing business tasks as well as deeper, long reasoning).
All parent models had been trained on chain-of-thoughts, forest-of-thoughts, etc. before transforming the collection into thinking models (I suspect it was already a thinking model, but the UIs have only just caught up with the structured outputs, so now they can be displayed!). All is deception!