
Loss increases during the training #10

Open
AmmarKamoona opened this issue Sep 5, 2019 · 4 comments

Comments

@AmmarKamoona

I am trying to train the model, but the loss increases rather than decreases.

I have attached the training part below.
___________________________________code start
from torch.utils.data import DataLoader
from tqdm import tqdm

for cl_idx, video_id in enumerate(dataset.train_videos):

    # Select the current training video and build its loader
    dataset.train(video_id)
    loader = DataLoader(dataset, collate_fn=dataset.collate_fn)

    # Score containers from the original test script (not needed for training)
    # sample_llk = np.zeros(shape=(len(loader) + t - 1,))
    # sample_rec = np.zeros(shape=(len(loader) + t - 1,))
    # sample_y = dataset.load_test_sequence_gt(video_id)

    running_loss = 0.0
    for i, (x, y) in tqdm(enumerate(loader), desc=f'Computing scores for {dataset}'):
        optimizer.zero_grad()
        x = x.to('cuda')

        # Forward pass: reconstruction, latent code and its estimated density
        x_r, z, z_dist = model(x)

        # Joint loss over reconstruction and latent terms
        loss = criterion(x, x_r, z, z_dist)

        # Backward pass and parameter update
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        print(running_loss)
@DavideA
Contributor

DavideA commented Sep 5, 2019

Hi,

What optimizer are you using?
Secondly, I see you are printing the running loss (the sum of the batch losses so far). Are you sure the loss is actually increasing?
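
(To illustrate the point with a minimal sketch, assuming the same loader, model, optimizer and criterion as in the snippet above: running_loss is a cumulative sum, so it grows by construction; the per-batch loss or its running average is what indicates whether training is actually diverging.)

running_loss = 0.0
for i, (x, y) in enumerate(loader):
    optimizer.zero_grad()
    x = x.to('cuda')
    x_r, z, z_dist = model(x)
    loss = criterion(x, x_r, z, z_dist)
    loss.backward()
    optimizer.step()

    running_loss += loss.item()

    # Cumulative sum: increases by construction, even when training is healthy
    print('running loss:', running_loss)
    # Per-batch loss and its running average: the quantities worth monitoring
    print('batch loss  :', loss.item())
    print('average loss:', running_loss / (i + 1))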

@AmmarKamoona
Author

Hi there,
I am using the Adam optimizer with the learning rate set to 0.001 and the number of epochs set to 30. The loss decreases at first and then increases:

the loss is= tensor(14281.7119, device='cuda:0', grad_fn=)
Computing scores for ShanghaiTech (video id = 01_019): 94it [06:52, 4.37s/it]
the loss is= tensor(14346.3477, device='cuda:0', grad_fn=)
Computing scores for ShanghaiTech (video id = 01_019): 95it [06:56, 4.38s/it]
the loss is= tensor(15669.5996, device='cuda:0', grad_fn=)
Computing scores for ShanghaiTech (video id = 01_019): 96it [07:00, 4.38s/it]
the loss is= tensor(15653.5596, device='cuda:0', grad_fn=)
Computing scores for ShanghaiTech (video id = 01_019): 97it [07:05, 4.38s/it]
the loss is= tensor(15492.5547, device='cuda:0', grad_fn=)
Computing scores for ShanghaiTech (video id = 01_019): 98it [07:09, 4.36s/it]
the loss is= tensor(16204.8232, device='cuda:0', grad_fn=)
Computing scores for ShanghaiTech (video id = 01_019): 99it [07:13, 4.37s/it]
the loss is= tensor(15876.1279, device='cuda:0', grad_fn=)
Computing scores for ShanghaiTech (video id = 01_019): 100it [07:18, 4.37s/it]
the loss is= tensor(16464.8418, device='cuda:0', grad_fn=)
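
(For context, a plausible reading of that setup, assuming a model variable and the per-video loop from the first post; the original script may construct things differently:)

import torch

# Adam with the learning rate stated above
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(30):
    # ... run the per-video training loop from the first post,
    # resetting running_loss at the start of each epoch
    pass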

@weibienao

@AmmarKamoona Hi, can you share the complete training code for the ShanghaiTech dataset?
Best wishes

@MStumpp

MStumpp commented Sep 19, 2019

@AmmarKamoona Have you been successful with the training? We just started experimenting with the code.
